Re: [FFmpeg-devel] [PATCH] avcodec/mjpegenc: disable unused code with AMV

2017-08-09 Thread Davinder Singh
On Thu, Aug 10, 2017 at 6:59 AM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Wed, Aug 09, 2017 at 07:46:30AM +, Davinder Singh wrote:
> > hi,
> >
> > this disables unused function amv_encode_picture() when AMV encoder is
> > disabled (and mjpeg enabled).
> > silences this warning:
> > CC libavcodec/mjpegenc.o
> > libavcodec/mjpegenc.c:351:12: warning: unused function
> 'amv_encode_picture'
> > [-Wunused-function]
> > static int amv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
> >    ^
> >
> > Patch attached.
> >
> > Regards.
> > --
> > Davinder Singh
>
> >  mjpegenc.c |   10 +-
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> > 0fe583bfdb304ce3d8881e9836cc1983c65e3a90
> 0001-avcodec-mjpegenc-disable-unused-code-with-AMV.patch
> > From cadf679bb0ad6d09d451512238e790645262f2f8 Mon Sep 17 00:00:00 2001
> > From: Davinder Singh <ds.mud...@gmail.com>
> > Date: Wed, 9 Aug 2017 13:01:07 +0530
> > Subject: [PATCH] avcodec/mjpegenc: disable unused code with AMV
> >
> > disable unused amv_encode_picture() when AMV encoder is not configured.
> > minor formatting improvement.
> > ---
> >  libavcodec/mjpegenc.c | 10 +-
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/libavcodec/mjpegenc.c b/libavcodec/mjpegenc.c
> > index ee77cde8cb..e6cdaf6376 100644
> > --- a/libavcodec/mjpegenc.c
> > +++ b/libavcodec/mjpegenc.c
> > @@ -39,7 +39,6 @@
> >  #include "mjpeg.h"
> >  #include "mjpegenc.h"
> >
> > -
> >  static int alloc_huffman(MpegEncContext *s)
> >  {
> >  MJpegContext *m = s->mjpeg_ctx;
>
> please move unrelated cosmetic changes into a separate patch
>
> Patches attached.

[...]
-- 
Davinder Singh


0001-avcodec-mjpegenc-disable-unused-code-with-AMV.patch
Description: Binary data


0002-avcodec-mjpegenc-cosmetic-changes.patch
Description: Binary data


[FFmpeg-devel] [PATCH] avcodec/mjpegenc: disable unused code with AMV

2017-08-09 Thread Davinder Singh
hi,

this disables unused function amv_encode_picture() when AMV encoder is
disabled (and mjpeg enabled).
silences this warning:
CC libavcodec/mjpegenc.o
libavcodec/mjpegenc.c:351:12: warning: unused function 'amv_encode_picture'
[-Wunused-function]
static int amv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
   ^

Patch attached.

Regards.
-- 
Davinder Singh


0001-avcodec-mjpegenc-disable-unused-code-with-AMV.patch
Description: Binary data


Re: [FFmpeg-devel] [DISCUSSION] Motion Estimation API/Library

2017-08-03 Thread Davinder Singh
On Tue, Aug 1, 2017 at 7:40 AM Davinder Singh <ds.mud...@gmail.com> wrote:

> [...]
>

Keeping the question of where the code lives aside, the main thing is the API, so we need to talk about it.
-- 
Davinder Singh


Re: [FFmpeg-devel] [DISCUSSION] Motion Estimation API/Library

2017-08-01 Thread Davinder Singh
Hi Nicolas,

On Tue, Aug 1, 2017 at 11:57 AM Nicolas George <geo...@nsup.org> wrote:

> Le quartidi 14 thermidor, an CCXXV, Davinder Singh a écrit :
> > As we've been planning since forever (
> > https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/snow.h#L182,
> > http://ffmpeg.org/pipermail/ffmpeg-devel/2016-July/197095.html) we need
> > Motion Estimation code that could be shared in codecs and motion filters.
> >
> > The idea is to make the Motion Estimation independent of Encoders, more
> > specifically of AVCodecContext.
>
> This is a very good idea.
>
> > So, I’ve moved motion estimation and me_cmp code to a new location -
> > libmotion <https://github.com/dsmudhar/FFmpeg/tree/gsoc17/libmotion>. I
> > think it’s a good idea to make a new lib instead moving it to
> > libavutil (as discussed
> > previously <
> http://ffmpeg.org/pipermail/ffmpeg-devel/2016-July/197161.html>).
> > That way we can make it independent of everything else in FFmpeg.
>
> But this is not. Please no, not yet another library!
>
> A separate library like that will at the beginning only be used by the
> handful of hard-core developers. Unless it meets a wide success very
> fast, with very useful tools available immediately, it will soon be
> forgotten ("seems interesting, but not yet mature, I'll come back and
> see in six months") and start bitrotting as soon as you have moved to
> something else.
>
> In particular, the main policy of FFmpeg is to not depend on external
> libraries for core features. Therefore, if your project is a separate
>

Just to be clear, it won't be an "external" library like OpenCV...


> library, it will definitely not be used by FFmpeg codecs like you wish
> in your first paragraph.
>
> If you want a fighting chance of a project that is not stillborn, I
> think you need to make it part of FFmpeg, and make sure important
>

.. it will be part of FFmpeg, like libavfilter: just a new module, libmotion.


> components of FFmpeg use it as soon as possible.
>
> Regards,
>
> --
>   Nicolas George
> [...]


Regards,
-- 
Davinder Singh


[FFmpeg-devel] [DISCUSSION] Motion Estimation API/Library

2017-07-31 Thread Davinder Singh
Hello everyone,

As we've been planning since forever (
https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/snow.h#L182,
http://ffmpeg.org/pipermail/ffmpeg-devel/2016-July/197095.html) we need
Motion Estimation code that can be shared between codecs and motion filters.

The idea is to make the Motion Estimation independent of Encoders, more
specifically of AVCodecContext.
So, I’ve moved the motion estimation and me_cmp code to a new location -
libmotion <https://github.com/dsmudhar/FFmpeg/tree/gsoc17/libmotion>. I
think it’s a good idea to make a new lib instead of moving it to
libavutil (as discussed
previously <http://ffmpeg.org/pipermail/ffmpeg-devel/2016-July/197161.html>).
That way we can make it independent of everything else in FFmpeg.
Everything will be accessed through a single context, AVMotionEstContext, and
also initialized through it. This API could also be used by researchers
to test new codecs or FRUC. Additionally, in the future we could implement the
best Optical Flow methods (http://vision.middlebury.edu/flow/eval/results/results-i1.php),
which are not available in OpenCV.
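
To make the discussion concrete, here is a rough sketch of what the single-context
API could look like. The av_ prefixes, field layout and signatures below are
placeholders for discussion, not a final or existing interface:

typedef struct AVMotionEstContext {
    int mb_size;                     /* macroblock size, e.g. 16 */
    int search_param;                /* search window parameter */
    int width, height;               /* luma plane dimensions */
    const uint8_t *data_cur;         /* current frame plane */
    const uint8_t *data_ref;         /* reference frame plane */
    int linesize;
    /* cost function shared by all search methods */
    uint64_t (*get_cost)(struct AVMotionEstContext *me_ctx,
                         int x_mb, int y_mb, int x_mv, int y_mv);
} AVMotionEstContext;

/* initialize the context once, then run any search method on it */
void av_me_init(AVMotionEstContext *me_ctx, int mb_size, int search_param,
                int width, int height);
uint64_t av_me_search_epzs(AVMotionEstContext *me_ctx, int x_mb, int y_mb, int *mv);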

GOALS:
1. Make me_cmp_func()
<https://github.com/dsmudhar/FFmpeg/blob/gsoc17/libmotion/me_cmp.h#L48>
independent of MpegEncContext and AVCodecContext. To accomplish this, we
can move the following from MpegEncContext to MotionEstContext or[*]
MECmpContext (as these are used in the cmp functions in me_cmp.c
<https://github.com/dsmudhar/FFmpeg/blob/gsoc17/libmotion/me_cmp.c>):
int mb_intra;
PixblockDSPContext pdsp;
FDCTDSPContext fdsp;
IDCTDSPContext idsp;
int (*fast_dct_quantize)();
void (*dct_unquantize_inter)();
void (*dct_unquantize_intra)();
int block_last_index[12];
int qscale;
ScanTable intra_scantable;
int ac_esc_length;
uint8_t *intra_ac_vlc_length;
uint8_t *intra_ac_vlc_last_length;
uint8_t *inter_ac_vlc_length;
uint8_t *inter_ac_vlc_last_length;
uint8_t *luma_dc_vlc_length;
MECmpContext mecc; [*]

As pointed out by michaelni earlier, these are mostly used by Encoders
rather than by ME. If libavcodec is the more appropriate place for some of
these (e.g. pdsp, fdsp) and they are also used by a few cmp functions, like
FF_CMP_BIT, FF_CMP_RD, FF_CMP_DCT, FF_CMP_PSNR,
we can move these to the specific encoders and set the function address in
the Context before using them. Or we can register cmp functions by calling an
API while initializing FFmpeg/Options (see the sketch after this list). Only
generic functions will be part of libmotion.

2. Initialize MECmpContext in one place (including the pdsp and dct init if
they are moved into MECmpContext).
3. Make the motion_est.c
<https://github.com/dsmudhar/FFmpeg/blob/gsoc17/libmotion/motion_est.c>
code independent of MpegEncContext. For this a lot of things need to be
moved from MpegEncContext to MotionEstContext. MpegEncContext is used in a
lot of places; please suggest your approach on this. Generic ME should
surely be separated.
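
As a rough illustration of the registration idea from point 1 (the function name
and signature here are hypothetical, nothing like this exists yet): an encoder or
filter that needs a non-generic metric would plug its own compare function into
the shared context instead of libmotion depending on encoder state:

typedef uint64_t (*AVMECmpFunc)(AVMotionEstContext *me_ctx,
                                int x_mb, int y_mb, int x_mv, int y_mv);

/* hypothetical registration call, done while initializing the filter/encoder */
void av_me_register_cmp(AVMotionEstContext *me_ctx, int cmp_id, AVMECmpFunc cmp);

/* e.g. an encoder that wants FF_CMP_RD provides its own my_rd_cmp (hypothetical) */
av_me_register_cmp(me_ctx, FF_CMP_RD, my_rd_cmp);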

Please share your views on this.

Regards,
-- 
Davinder Singh


Re: [FFmpeg-devel] [PATCH] minterpolate: added codec_me_mode

2017-06-02 Thread Davinder Singh
On Fri, May 26, 2017 at 2:49 PM Paul B Mahol <one...@gmail.com> wrote:

> On 5/26/17, Michael Niedermayer <mich...@niedermayer.cc> wrote:
> > On Mon, May 08, 2017 at 07:40:25PM +, Davinder Singh wrote:
> >> hi,
> >>
> >> On Mon, Apr 24, 2017 at 9:43 PM Paul B Mahol <one...@gmail.com> wrote:
> >>
> >> > On 4/24/17, Davinder Singh <ds.mud...@gmail.com> wrote:
> >> > > Patch attached.
> >> > >
> >> >
> >> > So this encodes video frames to generate motion vectors?
> >> >
> >>
> >> yes. it significantly improves the frame quality. can please you test
> it?
> >>
> >> the filter will be made independent of libavcodec.
> >
> > In the absence of further comments, i intend to apply this patch
> > (unless i spot some issue)
>
> Shouldn't this be applied after it's extracted from the snow encoder?
>
> Having a filter encode video just to get motion vectors seems quite bad
> to me.
>

i agree with you.
it should be extracted first. i'm on it.



Re: [FFmpeg-devel] [PATCH] minterpolate: added codec_me_mode

2017-05-08 Thread Davinder Singh
hi,

On Mon, Apr 24, 2017 at 9:43 PM Paul B Mahol <one...@gmail.com> wrote:

> On 4/24/17, Davinder Singh <ds.mud...@gmail.com> wrote:
> > Patch attached.
> >
>
> So this encodes video frames to generate motion vectors?
>

yes. it significantly improves the frame quality. can you please test it?

the filter will be made independent of libavcodec.

[...]


[FFmpeg-devel] [PATCH] minterpolate: added codec_me_mode

2017-04-24 Thread Davinder Singh
Patch attached.


0001-minterpolate-added-codec_me_mode.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-29 Thread Davinder Singh
On Mon, Aug 29, 2016 at 12:20 PM Clément Bœsch  wrote:

> On Sun, Aug 28, 2016 at 11:31:10AM +0200, Paul B Mahol wrote:
> > On Sat, Aug 27, 2016 at 2:45 PM, Robert Krüger  >
> > wrote:
> > >
> > > what is the way to best contribute with test cases? I have two samples
> that
> > > I use for testing, so far the results look very, very promising but
> there
> > > are still a few artefact problems, so these could maybe serve as a good
> > > test case. In some cases the artefacts almost certainly look like
> there is
> > > a bug in motion vector calculation as a very large area suddenly
> begins to
> > > move in which really only a small part is/should be moving.
> > >
> > > How do I make this available to you or other devs at this stage? Just
> trac
> > > tickets or is it too early for that and you would like to work on this
> > > differently? After all it is always a grey area, when this can be
> > > considered solved, as it is a process of gradual improvements, so maybe
> > > it's not well-suited for a ticket.
> > >
> > > Let me know. Happy to contribute samples and some testing time here and
> > > there.
> >
> >
> > You can provide them either publicly or privately to any of devs
> interested.
> > I'm always interested in short samples exhibiting the problem.
>
> Using http://b.pkh.me/sfx-sky.mov and comparing:
>
>   ./ffplay -flags2 +export_mvs sfx-sky.mov -vf codecview=mv=pf
>
> with
>
>   ./ffplay sfx-sky.mov -vf mestimate,codecview=mv=pf
>
> The encoded mvs look much more meaningful than the ones found with
> mestimate. Typically, if you're looking for a global motion of some sort,
> the "native" mvs make it much clearer that there is a mostly static area
> at the bottom and a panning one on top with its direction pretty obvious.
> With mestimate, it just looks like small noise. Any plan to improve this?
>
> --
> Clément B
>

that's probably because mestimate doesn't use a penalty term, as used in
minterpolate and encoders, to make the motion field smooth. mestimate just
stores the best match. it can easily be done by adding
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_minterpolate.c#L274
to the default cost function
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/motion_estimation.c#L59

The reason I didn't do that yet is that we have plans to make a Motion Estimation
API, and the cost function doesn't seem to be the correct place for a penalty term.
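
Roughly, the change would add a smoothness penalty on top of the plain block-matching
cost used by the default cost function, something like this sketch (get_sad() is a
placeholder for the existing SAD call, pred_x/pred_y stand for the predicted/median
vector; the exact penalty form and weight are assumptions, not the final code):

static uint64_t cost_with_penalty(AVMotionEstContext *me_ctx,
                                  int x_mb, int y_mb, int x_mv, int y_mv)
{
    uint64_t sad = get_sad(me_ctx, x_mb, y_mb, x_mv, y_mv);  /* plain block match */
    int mv_x = x_mv - x_mb, mv_y = y_mv - y_mb;
    /* penalize vectors far from the prediction so the field stays smooth
       instead of locking onto noisy "best" matches */
    int penalty = FFABS(mv_x - me_ctx->pred_x) + FFABS(mv_y - me_ctx->pred_y);
    return sad + penalty;
}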


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-29 Thread Davinder Singh
On Sat, Aug 27, 2016 at 6:15 PM Robert Krüger 
wrote:

> [...]
> what is the way to best contribute with test cases? I have two samples that
> I use for testing, so far the results look very, very promising but there
> are still a few artefact problems, so these could maybe serve as a good
> test case. In some cases the artefacts almost certainly look like there is
> a bug in motion vector calculation as a very large area suddenly begins to
> move in which really only a small part is/should be moving.
>
> How do I make this available to you or other devs at this stage? Just trac
> tickets or is it too early for that and you would like to work on this
> differently? After all it is always a grey area, when this can be
> considered solved, as it is a process of gradual improvements, so maybe
> it's not well-suited for a ticket.
>
> Let me know. Happy to contribute samples and some testing time here and
> there.
>

I'm currently testing support for an unrestricted search area, which can be
used with EPZS and has improved the quality.
Once I send the patch you can test whether it actually reduces the artifacts or
makes them worse.

For smaller details, newer recursive algorithms should perform better, like
this one, https://www.osapublishing.org/jdt/abstract.cfm?uri=jdt-11-7-580,
which uses Modified 3D recursive search iteratively.
So, at this point, before any new algorithm is implemented, the best way to test
is to verify whether the experiments I do improve the quality for most of the
samples.

One way is to compare PSNR, as it's hard to compare each frame visually:
http://ffmpeg.org/pipermail/ffmpeg-devel/2016-April/193067.html (for better
results, the original sample should be 60fps, subsampled to 30).
For visual testing, I used to transcode the interpolated sample to images and
compare them to the original ones.

Thanks for testing.


Re: [FFmpeg-devel] [PATCH 1/2] avfilter: vf_minterpolate: rename chroma log vars

2016-08-29 Thread Davinder Singh
On Mon, Aug 29, 2016 at 8:59 PM Paul B Mahol  wrote:
[...]

And using avcodec* stuff in lavfi is not needed.

It uses AVPixFmtDescriptor now. Should I also not use libavcodec/mathops.h? The
median-of-3 function is shared; can it be moved?

updated patch attached.


0001-avfilter-vf_minterpolate-rename-chroma-log-vars.patch
Description: Binary data


Re: [FFmpeg-devel] [PATCH] MAINTAINERS: add myself for Motion Estimation and Interpolation filters

2016-08-28 Thread Davinder Singh
On Sun, Aug 28, 2016 at 5:23 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Fri, Aug 26, 2016 at 09:34:39PM +, Davinder Singh wrote:
> > patch attached.
>
> >  MAINTAINERS |5 +
> >  1 file changed, 5 insertions(+)
> > 51ccc94a6198acc33e77db17bf973e88f63801f5
> 0001-MAINTAINER-add-myself-for-Motion-Estimation-and-Inte.patch
> > From 8bd7fa5d13b1e1ffda957656b482d55933557d42 Mon Sep 17 00:00:00 2001
>
> > From: dsmudhar <ds.mud...@gmail.com>
>
> btw, forgot to mention last time but please fix the configured
> name in your git config unless you want to list dsmudhar instead of
> your name as author
>

done.

thanks


[FFmpeg-devel] [PATCH 2/2] avfilter: vf_minterpolate: fix green line issue

2016-08-28 Thread Davinder Singh
hi,

The MV can be negative, and right shifting rounds it towards negative infinity.
I changed it to division, which doesn't seem to affect speed at all.
This fixes the green line that appears on the top or left in bidir mode, and
also improved the quality by +0.08 dB.
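
For reference, this is the behaviour the fix is about: on typical two's-complement
platforms an arithmetic right shift of a negative value rounds towards negative
infinity, while integer division truncates towards zero (minimal standalone example,
not the filter code):

#include <stdio.h>

int main(void)
{
    int mv = -3;
    printf("%d\n", mv >> 1); /* -2: shift rounds towards -infinity */
    printf("%d\n", mv / 2);  /* -1: division truncates towards zero */
    return 0;
}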


thanks,
DSM_


0002-avfilter-vf_minterpolate-fix-green-line-issue.patch
Description: Binary data


[FFmpeg-devel] [PATCH 1/2] avfilter: vf_minterpolate: rename chroma log vars

2016-08-28 Thread Davinder Singh
hi,

This renames the confusing chroma variables to the names used in
AVPixFmtDescriptor, which is more consistent.
It also removes some useless vars from the context.

thanks
DSM_


0001-avfilter-vf_minterpolate-rename-chroma-log-vars.patch
Description: Binary data


Re: [FFmpeg-devel] [PATCH] avfilter/motion_estimation: Fix pre processor formating

2016-08-26 Thread Davinder Singh
On Sat, Aug 27, 2016 at 2:26 AM Michael Niedermayer 
wrote:

> On Fri, Aug 26, 2016 at 04:00:09PM -0300, James Almer wrote:
> > On 8/26/2016 3:19 PM, Michael Niedermayer wrote:
> > > IIRC, The spaces are not standard before the #
> >
> > We use them sometimes when nesting several preprocessor checks,
>
> spaces after the # are standard IIRC, before they are not but maybe
> i misremember
>
>
> > but in this case yes, it should have no spaces.
> >
> > >
> > > Signed-off-by: Michael Niedermayer 
> > > ---
> > >  libavfilter/motion_estimation.c | 6 +++---
> > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/libavfilter/motion_estimation.c
> b/libavfilter/motion_estimation.c
> > > index fa6f49d..0f9ba21 100644
> > > --- a/libavfilter/motion_estimation.c
> > > +++ b/libavfilter/motion_estimation.c
> > > @@ -262,10 +262,10 @@ uint64_t ff_me_search_ds(AVMotionEstContext
> *me_ctx, int x_mb, int y_mb, int *mv
> > >  x = mv[0];
> > >  y = mv[1];
> > >
> > > -#if 1
> > > +#if 1
> > >  for (i = 0; i < 8; i++)
> > >  COST_P_MV(x + dia2[i][0], y + dia2[i][1]);
> > > -#else
> > > +#else
> > >  /* this version skips previously examined 3 or 5 locations
> based on prev origin */
> > >  if (dir_x <= 0)
> > >  COST_P_MV(x - 2, y);
> > > @@ -286,7 +286,7 @@ uint64_t ff_me_search_ds(AVMotionEstContext
> *me_ctx, int x_mb, int y_mb, int *mv
> > >
> > >  dir_x = mv[0] - x;
> > >  dir_y = mv[1] - y;
> > > -#endif
> > > +#endif
> > >
> > >  } while (x != mv[0] || y != mv[1]);
> > >
> > >
> >
> > LGTM, assuming the disabled code has a reason to be there.
>
> yes its probably faster
>

yes, it is.

enabled code:
frame= 4689 fps= 51 q=-0.0 Lsize=N/A time=00:03:07.65 bitrate=N/A
speed=2.06x
real 1m31.239s
user 1m34.895s
sys 0m0.617s

disabled code:
frame= 4689 fps= 69 q=-0.0 Lsize=N/A time=00:03:07.65 bitrate=N/A
speed=2.75x
real 1m8.279s
user 1m12.039s
sys 0m0.553s

used this command for testing:
time ./ffmpeg -i ../matrixbench_mpeg2.mpg -vf mestimate=method=ds -f null -

I'll send a patch along with optimizations for the others; the same can be done
for NTSS or maybe hexagon search as well.


> applied
>
> thx
>
> [...]
>


[FFmpeg-devel] [PATCH] MAINTAINERS: add myself for Motion Estimation and Interpolation filters

2016-08-26 Thread Davinder Singh
patch attached.


0001-MAINTAINER-add-myself-for-Motion-Estimation-and-Inte.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-25 Thread Davinder Singh
On Thu, Aug 25, 2016 at 5:01 AM Andy Furniss  wrote:

>
>
> I am testing with a somewhat artificial sample in that it's a framerate
> de-interlace + scale down of a 1080i master, though it is "real" in the
> sense that I may want to repair similar files where people have produced
> a juddery mess by using yadif=0.
>

thanks for testing.

>
> It's very fast and my old (2010 Panasonic plasma) TV can't interpolate
> it without artifacting in a few places, it can interpolate a field rate
> version flawlessly and both mcfps and minterpolate do a lot better with
> a 50fps master version -> 100fps, though they are still not perfect.
>
> As well as being fast it has overlays of varying opacity and some
> repeating patterns just to make things even harder.
>
> Some observations while trying to get the best result - given the number
> of options only a small subset could be tested:
>
> aobmc vs ombc, vsbmc 0 or 1 = no real difference.
>

now our main focus will be on "better" motion estimation that removes
artifacts in fast motion, rather than little tweaks like these.


> Any me method other than epzs had far too many artifacts to be used.
>
> Raising search_param to 48 or 64 or 128 just causes new artifacts.
>

that hopefully could be fixed. working on it.


>
> Reducing mb_size causes new artifacts.
>

Yes, for higher resolutions. For very small ones, reducing it could be essential.


> bilat vs bidir - similar but bilat has some artifacts on a still shot
> near the end of the defaults sample uploaded. bidir sometimes has green
> near the top of the screen.
>

i see that green line in other samples too. investigating.


>
> There are of course many small artifacts, to be seen by slowmo/framestep
> for both minterpolate and mcfps. Viewing fullspeed mcfps artifacts less
> on the car when it touches the edges than minterpolate. Frame stepping
> shows mcfps doesn't blend/blur as much on really fast moving background
> as minterpolate does.
>
> Included in the link below (which is a tar to stop google drive making
> terrible low quality/fps previews) are the 25fps master file, mcfps
> interpolation to 50fps, minterpolate with default options and
> minterpolate with defaults + bidir.
>
>
> https://drive.google.com/file/d/0BxP5-S1t9VEEM2VrTzlVdGZURVk/view?usp=sharing


thanks :)

DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-25 Thread Davinder Singh
On Thu, Aug 25, 2016 at 8:03 PM Michael Niedermayer 
wrote:

> [...]
>
> why do these not try predictors like epzs / umh ?
> i guess some paper doesnt say exlpicitly it should be done
> but really it should be done for all predictive zonal searches IMO
>

this should be in a different patch, no?
yeah, the paper doesn't specify the use of predictors. i thought DS and HEX
were just new, more efficient patterns.


>
> [...]
> AVOption is not compatible with general enums, as C does not gurantee
> them to be stored in an int, it just happens to work on most platforms
>
> [...]
> with this style of smoothness cost you likely want to make an exception
> for the 0,0 vector (giving it the same "penalty" as the median or even
> very slightly less)
> This would normally be implemented by not adding the penalty on
> the 0,0 predictor check but as it's implemented in the compare function
> itself it would need a check
> i think it would slightly improve quality. Of course if it does not then
> ignore this suggestion
>
> also i will apply this patchset once the issues raised here are fixed
> if noone objects, its much easier and more efficient to work in main
> git than on top of a growing patch
>
> Thanks
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Republics decline into democracies and democracies degenerate into
> despotisms. -- Aristotle


0001-added-motion-estimation-and-interpolation-filters-v5F.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-23 Thread Davinder Singh
On Tue, Aug 23, 2016 at 5:38 AM Andy Furniss  wrote:

> [...]
>
> Nice I can see the edges are better than the last version.
>
> The doc/filters.texi hunk doesn't apply to git master.
>
> I was going to post some comparisons with mcfps tonight, but I'll need
> to redo them to see what's changed.


fixed docs conflict.

thanks for testing!


0001-added-motion-estimation-and-interpolation-filters.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-22 Thread Davinder Singh
On Wed, Jun 1, 2016 at 4:13 AM Davinder Singh <ds.mud...@gmail.com> wrote:

> [...]
>

final patch attached. please review.

this includes bug fixes and various other improvements. also added filter
docs.


0001-added-motion-estimation-and-interpolation-filters-v3F.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-20 Thread Davinder Singh
On Sat, Aug 20, 2016 at 5:45 PM Michael Niedermayer 
wrote:

> how does it perform with matrixbench instead of BBB ?
>
> as reference 100fps matrixbench generated with mcfps
> from https://github.com/michaelni/FFmpeg/tree/mcfps
> ./ffmpeg -i matrixbench_mpeg2.mpg -vf 'mcfps=3:100,setpts=4*PTS'
> output for easy vissual comparission:
> http://ffmpeg.org/~michael/matrix100.avi


have a look: http://www.mediafire.com/?sssw8tqj5kn3vbk


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-20 Thread Davinder Singh
On Fri, Aug 19, 2016 at 7:59 PM Robert Krüger 
wrote:

> [...]

Impressive results, great job!

thanks :)

>
> I just tried  minterpolate=fps=250:mc_mode=aobmc:me=epzs and did have some
> artefacts in one of my slowmo samples but overall the quality is very, very
> nice! If you're interested in more samples or in more testing, let me know.
>

search_param 32 (default) works best for me for 720p videos. For 1080p,
higher can be better; it reduces artifacts in fast motion. For low-end
(480p), p=16 works fine. You can also try the bidir me_mode.

>
> Is the command line I used the one best for reducing artefacts or are there
> options known to be better in terms of artefact reduction?
>


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Davinder Singh
On Fri, Aug 19, 2016 at 3:27 AM Paul B Mahol <one...@gmail.com> wrote:

> On 8/18/16, Paul B Mahol <one...@gmail.com> wrote:
> > On 8/18/16, Davinder Singh <ds.mud...@gmail.com> wrote:
> >> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol <one...@gmail.com> wrote:
> >>
> >>> [...]
> >>>
> >>
> >> i tried to modify EPZS. i removed the early termination threshold which
> >> skip some predictors :-/
> >> new score:
> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> >> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
> >>
> >> original epzs:
> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> >> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200
> >>
> >> epzs uses small diamond pattern. a new pattern could also help.
> >>
> >> Please post patch like last time.
> >>>
> >>
> >> latest patch attached.
> >>
> >
> > UMH ME is still somehow buggy.
> >
> > EPZS seems good, great work!
>

What EPZS did that I wasn't able to do with UMH is fix a lot of
artifacts that require a bigger search window. If I increase the search param
with UMH, it increases the artifacts; the same happens with ESA.
I guess UMH uses fewer predictors but a better search pattern. If we combine
both EPZS and UMH, it should improve the quality further.


> Actually after second look EPZS is not much better than UMH here.
>

please give me a link to the video that you tested.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Davinder Singh
On Fri, Aug 19, 2016 at 1:50 AM Moritz Barsnick <barsn...@gmx.net> wrote:

> On Thu, Aug 18, 2016 at 19:26:39 +, Davinder Singh wrote:
>
> > +@table @option
> > +@item algo
> > +Set the algorithm to be used. Accepts one of the following values:
> > +
> > +@table @samp
> > +@item ebma
> > +Exhaustive block matching algorithm.
> > +@end table
> > +Default value is @samp{ebma}.
> [...]
> > +{ "method", "specify motion estimation method", OFFSET(method),
> AV_OPT_TYPE_INT, {.i64 = ME_METHOD_ESA}, ME_METHOD_ESA, ME_METHOD_UMH,
> FLAGS, "method" },
> > +CONST("esa",   "exhaustive search",
> ME_METHOD_ESA,   "method"),
> > +CONST("tss",   "three step search",
> ME_METHOD_TSS,   "method"),
> > +CONST("tdls",  "two dimensional logarithmic search",
> ME_METHOD_TDLS,  "method"),
> > +CONST("ntss",  "new three step search",
> ME_METHOD_NTSS,  "method"),
> > +CONST("fss",   "four step search",
>  ME_METHOD_FSS,   "method"),
> > +CONST("ds","diamond search",
>  ME_METHOD_DS,"method"),
> > +CONST("hexbs", "hexagon-based search",
>  ME_METHOD_HEXBS, "method"),
> > +CONST("epzs",  "enhanced predictive zonal search",
>  ME_METHOD_EPZS,  "method"),
> > +CONST("umh",   "uneven multi-hexagon search",
> ME_METHOD_UMH,   "method"),
>
> Documentation mismatches implementation. I think you forgot to adapt
> the former to your modifications.
> a) It's not "algo", it's "method".
> b) Default is "esa", not the non-existent "ebma".
> c) You should actually list all possible values.
>
> Furthermore, documentation for minterpolate is missing.
>

docs are yet to be updated.


> > +#define COST_MV(x, y)\
> > +{\
> > +cost = me_ctx->get_cost(me_ctx, x_mb, y_mb, x, y);\
> > +if (cost < cost_min) {\
> > +cost_min = cost;\
> > +mv[0] = x;\
> > +mv[1] = y;\
> > +}\
> > +}
>
> The recommendation for function macros is to wrap the definition into a
> "do { } while (0)". You do do that in other places.
>

will do.
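
For completeness, the wrapped form being suggested would look like this (same body
as the COST_MV macro quoted above, just wrapped so it behaves as a single statement
after an if/else without a dangling-semicolon problem):

#define COST_MV(x, y)\
do {\
    cost = me_ctx->get_cost(me_ctx, x_mb, y_mb, x, y);\
    if (cost < cost_min) {\
        cost_min = cost;\
        mv[0] = x;\
        mv[1] = y;\
    }\
} while (0)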


>
> > +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
>
> Why not
>if (cost_min != me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb))
> ??
>
> > +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
> > +return cost_min;
>
> Same here and many other places. "!=" is a valid operator. ;)
>

yes, that would be the case with the == operator, but this is the = (assignment) operator, no?
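
In other words, the two forms do different things (tiny illustration, not the
filter code):

/* original: assign the cost of the co-located block, then test whether it is zero
   (a zero cost means an exact match, so the search can return early) */
if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
    return cost_min;

/* the suggested "!=" form would compare cost_min (not yet assigned here) against
   the cost and never store it, which is not what the code intends */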


> > +#if 1
> > +for (i = 0; i < 8; i++)
> > +COST_P_MV(x + dia[i][0], y + dia[i][1]);
> > +#else
>
> These checks will disappear in the final version?
>
>
yes.


>
> > +{ "fps", "specify the frame rate", OFFSET(frame_rate),
> AV_OPT_TYPE_RATIONAL, {.dbl = 60}, 0, INT_MAX, FLAGS },
>
> Could you handle this with an AV_OPT_TYPE_VIDEO_RATE, made specially
> for cases such as this?
>

ok, will look into it.
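
If it helps, the option could then be declared roughly like this (a sketch only;
the default and limits would need checking against the rest of the filter):

{ "fps", "output frame rate", OFFSET(frame_rate),
  AV_OPT_TYPE_VIDEO_RATE, {.str = "60"}, 0, INT_MAX, FLAGS },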


>
> > +{ "mb_size", "specify the macroblock size", OFFSET(mb_size),
> AV_OPT_TYPE_INT, {.i64 = 16}, 4, 16, FLAGS },
> > +{ "search_param", "specify search parameter", OFFSET(search_param),
> AV_OPT_TYPE_INT, {.i64 = 32}, 4, INT_MAX, FLAGS },
>
> You can drop the "specify the" part. Every option lets you specify
> something. ;-)
>

sure. i thought of doing that while updating docs.


>
> > +//int term = (mv_x * mv_x + mv_y * mv_y);
> > +//int term = (FFABS(mv_x - me_ctx->pred_x) + FFABS(mv_y -
> me_ctx->pred_y));
> > +//fprintf(stdout, "sbad: %llu, term: %d\n", sbad, term);
> > +return sbad;// + term;
>
> Needs to be fixed?


> > +avcodec_get_chroma_sub_sample(inlink->format, &mi_ctx->chroma_h_shift, &mi_ctx->chroma_v_shift); //TODO remove
>
> To do.
>
> > +if (!(mi_ctx->int_blocks =
> av_mallocz_array(mi_ctx->b_count, sizeof(Block
>
> !=
>
> > +if (mi_ctx->me_method == ME_METHOD_ESA)
> > +ff_me_search_esa(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_TSS)
> > +ff_me_search_tss(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_m

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Davinder Singh
On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:

> [...]
>

i tried to modify EPZS. i removed the early termination threshold, which
skips some predictors :-/
new score:
$ tiny_psnr 60_source_2.yuv 60_bbb.yuv
stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200

original epzs:
$ tiny_psnr 60_source_2.yuv 60_bbb.yuv
stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200

epzs uses small diamond pattern. a new pattern could also help.

Please post patch like last time.
>

latest patch attached.


0001-motion-estimation-and-interpolation-filters-v2T.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Davinder Singh
On Tue, Aug 16, 2016 at 1:47 AM Paul B Mahol  wrote:

> [...]


hi,

made EPZS work correctly:
https://github.com/dsmudhar/FFmpeg/commit/0fc7a5490252a7f9832775b2773b35a42025553b
and also reduced the number of repeated predictors, which increased the speed as well.


> What about artifacts with UMH?
> See for example this sample:
> https://media.xiph.org/video/derf/y4m/in_to_tree_420_720p50.y4m


EPZS fixed artifacts in this video, and also in other videos where the motion
is fast. I can use p = 32 and the quality is improved without introducing more
artifacts, as happens with UMH.

thanks :)


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-16 Thread Davinder Singh
On Tue, Aug 16, 2016 at 5:46 PM Michael Niedermayer 
wrote:

> [...]
>
> not sure i suggested it previously already but you can add yourself
> to the MAINTAINERs file if you want to maintain / continue working on
> the code after GSoC
>

i surely will.

thanks


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Davinder Singh
On Tue, Aug 16, 2016, 1:47 AM Paul B Mahol <one...@gmail.com> wrote:

> On 8/15/16, Davinder Singh <ds.mud...@gmail.com> wrote:
> > On Tue, Aug 16, 2016 at 1:40 AM Davinder Singh <ds.mud...@gmail.com>
> wrote:
> >
> >> On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol <one...@gmail.com> wrote:
> >>
> >>> [...]
> >>
> >>
> >>> Also, why is there no code for scene change detection?
> >>> If scene changes abruptly it will give bad frame.
> >>>
> >>
> >> added scene change detection from framerate filter:
> >>
> >>
> https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78
> >>
> >> On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer
> >> <mich...@niedermayer.cc> wrote:
> >>
> >>> [...]
> >>
> >>
> >>> the motion estimation should already produce a "matching score" of some
> >>> kind for every block, its sum is probably a good indication how
> >>> similar frames are
> >>> the sum probably would need to be compared to some meassure of variance
> >>> for the frame so near black frames dont get better matches
> >>> a bit like a correlation coefficient
> >>> you can also look at
> >>> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
> >>>
> >>
> >> i also tested comparing sum of SBAD score but it gave me mostly false
> >> detection.
> >> vf_framerate one works even with dark scenes (i reduced threshold from 7
> >> to 5) correctly, though it doesn't consider any motion.
> >>
> >
> > i currently duplicate the frames for one loop of interpolations (until
> next
> > frame arrives), blending can also be done.
> >
> https://github.com/dsmudhar/FFmpeg/blob/1ad01c530569dfa1f085a31b6435597a97001a78/libavfilter/vf_minterpolate.c#L1101
> > which one you think would be better? frame dup seems perfect to me
>
> What about artifacts with UMH?
> See for example this sample:
> https://media.xiph.org/video/derf/y4m/in_to_tree_420_720p50.y4m


Trying to improve the quality of frames. The "smoothness" term, suggested
by Michael, should reduce the artifacts.

>


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Davinder Singh
On Tue, Aug 16, 2016 at 1:40 AM Davinder Singh <ds.mud...@gmail.com> wrote:

> On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol <one...@gmail.com> wrote:
>
>> [...]
>
>
>> Also, why is there no code for scene change detection?
>> If scene changes abruptly it will give bad frame.
>>
>
> added scene change detection from framerate filter:
>
> https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78
>
> On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer
> <mich...@niedermayer.cc> wrote:
>
>> [...]
>
>
>> the motion estimation should already produce a "matching score" of some
>> kind for every block, its sum is probably a good indication how
>> similar frames are
>> the sum probably would need to be compared to some meassure of variance
>> for the frame so near black frames dont get better matches
>> a bit like a correlation coefficient
>> you can also look at
>> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
>>
>
> i also tested comparing sum of SBAD score but it gave me mostly false
> detection.
> vf_framerate one works even with dark scenes (i reduced threshold from 7
> to 5) correctly, though it doesn't consider any motion.
>

I currently duplicate the frames for one loop of interpolations (until the next
frame arrives); blending can also be done.
https://github.com/dsmudhar/FFmpeg/blob/1ad01c530569dfa1f085a31b6435597a97001a78/libavfilter/vf_minterpolate.c#L1101
Which one do you think would be better? Frame duplication seems perfect to me.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Davinder Singh
On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:

> [...]
> Also, why is there no code for scene change detection?
> If scene changes abruptly it will give bad frame.
>

added scene change detection from framerate filter:
https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78

On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer 
wrote:

> [...]
> the motion estimation should already produce a "matching score" of some
> kind for every block, its sum is probably a good indication how
> similar frames are
> the sum probably would need to be compared to some meassure of variance
> for the frame so near black frames dont get better matches
> a bit like a correlation coefficient
> you can also look at
> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
>

I also tested comparing the sum of the SBAD scores, but it gave me mostly false
detections.
The vf_framerate one works correctly even with dark scenes (I reduced the
threshold from 7 to 5), though it doesn't consider any motion.
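
For reference, the framerate-filter style detection boils down to something like
this sketch: a normalized mean absolute difference between consecutive frames
compared against a threshold (the exact scaling in vf_framerate differs, so treat
the numbers as illustrative):

#include "libavutil/common.h"   /* for FFABS */

/* flag a scene change when consecutive frames differ too much on average */
static int scene_change(const uint8_t *cur, const uint8_t *prev,
                        int w, int h, int linesize, double threshold)
{
    int64_t sad = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            sad += FFABS(cur[x + y * linesize] - prev[x + y * linesize]);
    double mafd = (double) sad / (w * h);  /* mean absolute frame difference */
    return mafd > threshold;               /* e.g. threshold around 5..7 */
}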


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Davinder Singh
On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol <one...@gmail.com> wrote:

> On 8/13/16, Paul B Mahol <one...@gmail.com> wrote:
> > On 8/13/16, Davinder Singh <ds.mud...@gmail.com> wrote:
> >>
> >> patch attached.
> >>
> >
> > Please add EPZS to minterpolate.
> >
>
> Also, why is there no code for scene change detection?

> If scene changes abruptly it will give bad frame.
>

none of the papers had scene change detection. any idea how I can add it?


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Davinder Singh
On Sat, Aug 13, 2016 at 7:28 PM Paul B Mahol <one...@gmail.com> wrote:

> On 8/13/16, Davinder Singh <ds.mud...@gmail.com> wrote:
> >
> > patch attached.
> >
>
> Please add EPZS to minterpolate.
>

added.
https://github.com/dsmudhar/FFmpeg/commit/1ad40c3f405625075b93dde71a749593dc64f0e3




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-11 Thread Davinder Singh
On Thu, Aug 11, 2016 at 9:09 PM Paul B Mahol <one...@gmail.com> wrote:

> On 8/10/16, Davinder Singh <ds.mud...@gmail.com> wrote:
> > On Mon, Jul 25, 2016 at 9:35 AM Davinder Singh <ds.mud...@gmail.com>
> wrote:
> >
> >> https://github.com/dsmudhar/FFmpeg/commits/dev
> >>
> >> The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
> >> (which uses bilateral motion estimation) v/s motion vectors exported by
> >> mEstimate filter:
> >>
> >> $ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
> >> stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200
> >>
> >> $ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
> >> stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200
> >>
> >> Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5
> >>
> >> Compared to simple OBMC which over-smooth edges, Objects clustering and
> >> Adaptive OBMC makes the edges crisp but also introduce blocking
> artifacts
> >> where MVs are bad (with default search window = 7). But I think it’s
> ESA’s
> >> fault. The paper doesn’t specify which motion estimation method they
> used;
> >> I have been using ESA. I think quality can be further improved with
> EPZS,
> >> which I'm going to implement.
> >>
> >> I also tried to tweak VS-BMC (Variable size block motion compensation)
> >> which reduced the blocking artifacts in VS-BMC area. Had to do
> experiments
> >> a lot, more to be done.
> >>
> >> mEstimate filter (ESA) + Simple OBMC:
> >> http://www.mediafire.com/?3b8j1zj1lsuw979
> >> Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu
> >>
> >>
> >> Regards,
> >> DSM_
> >>
> >
> >
> >
> > implemented all other modern fast ME algorithms:
> > https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/vf_mestimate.c
>
> Could you please squash your commits and attach patches that add
> vf_mestimate
> and vf_minterpolate filters?
>

will send very soon.




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-10 Thread Davinder Singh
On Mon, Jul 25, 2016 at 9:35 AM Davinder Singh <ds.mud...@gmail.com> wrote:

> https://github.com/dsmudhar/FFmpeg/commits/dev
>
> The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
> (which uses bilateral motion estimation) v/s motion vectors exported by
> mEstimate filter:
>
> $ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
> stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200
>
> $ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
> stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200
>
> Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5
>
> Compared to simple OBMC which over-smooth edges, Objects clustering and
> Adaptive OBMC makes the edges crisp but also introduce blocking artifacts
> where MVs are bad (with default search window = 7). But I think it’s ESA’s
> fault. The paper doesn’t specify which motion estimation method they used;
> I have been using ESA. I think quality can be further improved with EPZS,
> which I'm going to implement.
>
> I also tried to tweak VS-BMC (Variable size block motion compensation)
> which reduced the blocking artifacts in VS-BMC area. Had to do experiments
> a lot, more to be done.
>
> mEstimate filter (ESA) + Simple OBMC:
> http://www.mediafire.com/?3b8j1zj1lsuw979
> Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu
>
>
> Regards,
> DSM_
>



implemented all other modern fast ME algorithms:
https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/vf_mestimate.c

Quality is further improved with UMH, which uses prediction [1]:
$ ../../../tiny_psnr 60_source_2.yuv 60_wtf.yuv
stddev: 1.05 PSNR: 47.65 MAXDIFF: 178 bytes:476928000/474163200
(search window = 18)

The only problem is when the motion in some movie scenes is too fast (e.g. far
objects in the background when the camera is rotating) and bigger than the
search window; then there will be artifacts.

The good thing with predictive UMH search (compared to ESA) is that we can use
a bigger search window; with P around 20, it removed all those artifacts
for which the search window wasn't large enough.

But using a search window that is too big reduces the quality.


Here's another idea: dynamic block size selection for MC-FRUC.
Since it's not video encoding, using a 16x16 block with a fixed search window
may not work the same for all video resolutions. What if we automatically
resize the block depending on the resolution? For example, if 16x16 with P=20
works fine for 1280x720 video, we can scale it according to the width: e.g.
for 1920x1080, which is 1.5x 1280, we use a 24x24 block and also scale P
accordingly. I haven't tested it yet though.
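
The scaling idea in numbers, as a rough sketch (purely illustrative, nothing like
this is implemented yet; FFALIGN comes from libavutil):

/* scale block size and search window with the frame width,
   using 1280-wide / 16x16 / P=20 as the reference point */
int base_w = 1280, base_mb = 16, base_p = 20;
int mb_size      = FFALIGN(base_mb * width / base_w, 2);  /* 24 for width = 1920 */
int search_param = base_p  * width / base_w;              /* 30 for width = 1920 */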

[1]: JVT-F017.pdf by Z Chen <http://akuvian.org/src/x264/JVT-F017.pdf.gz>


DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-27 Thread Davinder Singh
On Wed, Jul 27, 2016 at 4:50 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Tue, Jul 26, 2016 at 07:30:14PM +, Davinder Singh wrote:
> > hi
> >
> > On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje <rsbul...@gmail.com>
> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> > > <mich...@niedermayer.cc
> > > > wrote:
> > >
> > > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> > >
> > >
> > > So, correct me if I'm wrong, but it seems the complete ME code
> currently
> > > lives inside the filter. I wonder if that is the best way forward. I
> > > thought the idea was to split out the ME code into its own module and
> share
> > > it between various filters and the relevant encoders without a strict
> > > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > > anything like that?
> > >
> > > Ronald
> >
> >
> > The code is almost ready to be shared, I just didn't move that yet. That
> > makes changes difficult. mInterpolate will use those functions (which are
> > currently in mEstimate) to find true motion. My plan is to move that code
> > out of mEstimate to say, libavfilter/motion_estimation.c and can be
> shared
> > between multiple filters. Since that is general ME, I think it can be
> used
> > with encoding (with some changes). So, should I move it to libavutil
> > instead?
>
> one thing thats important,
> independant of where its moved, the interface between libs is part
> of the public ABI of that lib and thus cannot be changed once it is
> added. That is new functions can be added but they
> cannot be removed nor their interface changed once added until the
> next major version bump (which might occur once a year)
>
> its important to keep this in mind when designing inter lib interfaces


 I'll keep this in mind.



On Wed, Jul 27, 2016 at 6:12 PM Ronald S. Bultje <rsbul...@gmail.com> wrote:

> Hi,
>
> [...]
>
>
> You could - to address this - design it as if it lived in libavutil, but
> (until it actually is used in libavcodec) keep it in libavfilter with a ff_
> function prefix to ensure functions are not exported from the lib.
>
> Once libavcodec uses it, move it to libavutil and change ff_ to av(priv?)_
> prefix so it's exported.


I was thinking something like that! Will do.

Thanks!


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-26 Thread Davinder Singh
On Wed, Jul 27, 2016 at 1:06 AM Ronald S. Bultje <rsbul...@gmail.com> wrote:

> Hi,
>
> On Tue, Jul 26, 2016 at 3:30 PM, Davinder Singh <ds.mud...@gmail.com>
> wrote:
>
> > hi
> >
> > On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje <rsbul...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> > > <mich...@niedermayer.cc
> > > > wrote:
> > >
> > > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> > >
> > >
> > > So, correct me if I'm wrong, but it seems the complete ME code
> currently
> > > lives inside the filter. I wonder if that is the best way forward. I
> > > thought the idea was to split out the ME code into its own module and
> > share
> > > it between various filters and the relevant encoders without a strict
> > > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > > anything like that?
> > >
> > > Ronald
> >
> >
> > The code is almost ready to be shared, I just didn't move that yet. That
> > makes changes difficult. mInterpolate will use those functions (which are
> > currently in mEstimate) to find true motion. My plan is to move that code
> > out of mEstimate to say, libavfilter/motion_estimation.c and can be
> shared
> > between multiple filters. Since that is general ME, I think it can be
> used
> > with encoding (with some changes). So, should I move it to libavutil
> > instead?
>
>
> I have no strong opinion on where it lives, I'd say libavcodec since we
> already have some lavfilters depending on lavcodec, but if you prefer
> lavutil that's fine also. As long as the code itself is shared in the final
> product, it's good with me.
>

Alright, I'll go with libavutil if that's okay with everyone.

Thanks!


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-26 Thread Davinder Singh
hi

On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje <rsbul...@gmail.com> wrote:

> Hi,
>
> On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> <mich...@niedermayer.cc
> > wrote:
>
> > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > https://github.com/dsmudhar/FFmpeg/commits/dev
>
>
> So, correct me if I'm wrong, but it seems the complete ME code currently
> lives inside the filter. I wonder if that is the best way forward. I
> thought the idea was to split out the ME code into its own module and share
> it between various filters and the relevant encoders without a strict
> dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> anything like that?
>
> Ronald


The code is almost ready to be shared; I just haven't moved it yet, since that
makes changes difficult. mInterpolate will use those functions (which are
currently in mEstimate) to find true motion. My plan is to move that code
out of mEstimate to, say, libavfilter/motion_estimation.c so it can be shared
between multiple filters. Since that is general ME, I think it can be used
for encoding (with some changes). So, should I move it to libavutil
instead?


DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-24 Thread Davinder Singh
https://github.com/dsmudhar/FFmpeg/commits/dev

The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
(which uses bilateral motion estimation) vs. the motion vectors exported by
the mEstimate filter:

$ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200

$ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200

Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5

Compared to simple OBMC, which over-smooths edges, object clustering and
adaptive OBMC make the edges crisp but also introduce blocking artifacts
where the MVs are bad (with the default search window = 7). But I think it's
ESA's fault. The paper doesn't specify which motion estimation method they
used; I have been using ESA. I think the quality can be further improved with
EPZS, which I'm going to implement.
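
For context, plain OBMC computes each output pixel as a fixed-window weighted blend
of the motion-compensated predictions from the current block and its overlapping
neighbours, which is exactly what smooths (and over-smooths) edges. A rough sketch,
with all names illustrative and bounds checking omitted (not the filter code):

/* illustrative only: blend overlapping motion-compensated predictions for one block */
static void obmc_block(uint8_t *dst, const uint8_t *ref, int linesize,
                       int bx, int by, int mb_size, int nb_blocks,
                       const int (*mvs)[2], const uint16_t **window)
{
    for (int y = 0; y < mb_size; y++) {
        for (int x = 0; x < mb_size; x++) {
            uint32_t acc = 0, wsum = 0;
            for (int n = 0; n < nb_blocks; n++) {   /* current block + neighbours */
                int px = bx + x + mvs[n][0];        /* position in reference frame */
                int py = by + y + mvs[n][1];
                acc  += window[n][y * mb_size + x] * ref[px + py * linesize];
                wsum += window[n][y * mb_size + x];
            }
            dst[(bx + x) + (by + y) * linesize] = acc / wsum;
        }
    }
}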

I also tried to tweak VS-BMC (variable-size block motion compensation),
which reduced the blocking artifacts in the VS-BMC area. I had to experiment
a lot; more to be done.

mEstimate filter (ESA) + Simple OBMC:
http://www.mediafire.com/?3b8j1zj1lsuw979
Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu


Regards,
DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-22 Thread Davinder Singh
On Mon, Jun 20, 2016 at 4:33 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Mon, Jun 20, 2016 at 09:54:15AM +, Davinder Singh wrote:
> > On Sat, Jun 18, 2016 at 3:16 AM Michael Niedermayer
> <mich...@niedermayer.cc>
> > wrote:
> >
> > > On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
> > > [...]
> > > > Yes, I did that, after understanding it completely. It now works
> with the
> > > > motion vectors generated by mEstimate filter. Now I’m trying to
> improve
> > > it
> > > > based on this paper: Overlapped Block Motion Compensation: An
> > > > Estimation-Theoretic Approach
> > >
> > > > <
> > >
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8359=rep1=pdf
> > > >
> > >
> > > this is 22 years old
> > >
> > >
> > > > and
> > > > this one: Window Motion Compensation
> > > > <https://www.researchgate.net/publication/252182199>.Takes a lot of
> time
> > >
> > > this is 25 years old
> > >
> > > not saying old papers are bad, just that this represents the knowledge
> > > of 20 years ago
> > >
> > > also its important to keep in mind that blind block matching of any
> > > metric will not be enough. To find true motion the whole motion
> > > vector fields of multiple frames will need to be considered
> > >
> > > For example a ball thrown accross the field of view entering and
> > > exiting the picture needs to move smoothly and at the ends (in time)
> > > there are frames without the ball then a frame with the ball
> > > these 2 are not enough to interpolate the frames between as we have
> > > just one location where the ball is. With the next frames though
> > > we can find the motion trajectory of the ball and interpolate it end
> > > to end
> > >
> > > I think papers which work on problems like this and also interpolation
> > > of all the areas that end up overlapping and covering each other
> > > like the backgroud behind the ball in that example would be better
> > > starting points for implementing motion estiation because ultimatly
> > > that is the kind of ME code we would like to have.
> > > Block matching with various windows, OBMC, ... are all good but
> > > if in our example the vectors for the ball or background are off that
> > > will look rather bad with any motion compensation
> > > So trying to move a bit toward this would make sense but first
> > > having some motion estimation even really basic and dumb with
> > > mc working in a testable filter (pair) should probably be done.
> > > Iam just mentioning this as a bit of a preview of what i hope could
> > > eventually be implemented, maybe this would be after GSoC but its
> > > the kind of code needed to have really usable frame interpolation
> > >
> > >
> > >
> > > > reading them. I think we need to add new Raised Cosine window
> (weights)
> > > > along with Linear Window (currently implemented). What do you say?
> > >
> > > i dont know, the windows used in snow are already the best of several
> > > tried (for snow).
> > > no great gains will be found by changing the OBMC window from snow.
> > >
> > >
> > > >
> > > > Also making mInterpolate work with variable macroblock size MC. The
> > > current
> > > > interpolation works without half pel accuracy, though.
> > >
> > > mcfps has fully working 1/4 pel OBMC code, that should be fine to be
> > > used as is i think unless i miss something
> > >
> > > half pel is 20 years old, it is not usefull
> > > multiple block sizes on the MC side should not really matter ATM
> > > smaller blocks are a bit slower but first we should get the code
> > > working, then working with good quality and then working fast.
> > >
> > > multiple block sizes may be usefull for the estimation side if it
> > > improves estimation somehow.
> > >
> > > Can i see your current "work in progress" ?
> > >
> > >
> > > [...]
> > > > I’m moving estimation code to some new file motion_est.c file and the
> > > > methods are shared by both mEstimate and mInterpolate filters.
> mEstimate
> > > > store the MVs in frame’s side data for any other filter. Moreover,
> any
> > > > other filter if need post proces

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-20 Thread Davinder Singh
On Sat, Jun 18, 2016 at 3:16 AM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
> [...]
> > Yes, I did that, after understanding it completely. It now works with the
> > motion vectors generated by mEstimate filter. Now I’m trying to improve
> it
> > based on this paper: Overlapped Block Motion Compensation: An
> > Estimation-Theoretic Approach
>
> > <
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8359=rep1=pdf
> >
>
> this is 22 years old
>
>
> > and
> > this one: Window Motion Compensation
> > <https://www.researchgate.net/publication/252182199>.Takes a lot of time
>
> this is 25 years old
>
> not saying old papers are bad, just that this represents the knowledge
> of 20 years ago
>
> also its important to keep in mind that blind block matching of any
> metric will not be enough. To find true motion the whole motion
> vector fields of multiple frames will need to be considered
>
> For example a ball thrown accross the field of view entering and
> exiting the picture needs to move smoothly and at the ends (in time)
> there are frames without the ball then a frame with the ball
> these 2 are not enough to interpolate the frames between as we have
> just one location where the ball is. With the next frames though
> we can find the motion trajectory of the ball and interpolate it end
> to end
>
> I think papers which work on problems like this and also interpolation
> of all the areas that end up overlapping and covering each other
> like the backgroud behind the ball in that example would be better
> starting points for implementing motion estiation because ultimatly
> that is the kind of ME code we would like to have.
> Block matching with various windows, OBMC, ... are all good but
> if in our example the vectors for the ball or background are off that
> will look rather bad with any motion compensation
> So trying to move a bit toward this would make sense but first
> having some motion estimation even really basic and dumb with
> mc working in a testable filter (pair) should probably be done.
> Iam just mentioning this as a bit of a preview of what i hope could
> eventually be implemented, maybe this would be after GSoC but its
> the kind of code needed to have really usable frame interpolation
>
>
>
> > reading them. I think we need to add new Raised Cosine window (weights)
> > along with Linear Window (currently implemented). What do you say?
>
> i dont know, the windows used in snow are already the best of several
> tried (for snow).
> no great gains will be found by changing the OBMC window from snow.
>
>
> >
> > Also making mInterpolate work with variable macroblock size MC. The
> current
> > interpolation works without half pel accuracy, though.
>
> mcfps has fully working 1/4 pel OBMC code, that should be fine to be
> used as is i think unless i miss something
>
> half pel is 20 years old, it is not usefull
> multiple block sizes on the MC side should not really matter ATM
> smaller blocks are a bit slower but first we should get the code
> working, then working with good quality and then working fast.
>
> multiple block sizes may be usefull for the estimation side if it
> improves estimation somehow.
>
> Can i see your current "work in progress" ?
>
>
> [...]
> > I’m moving estimation code to some new file motion_est.c file and the
> > methods are shared by both mEstimate and mInterpolate filters. mEstimate
> > store the MVs in frame’s side data for any other filter. Moreover, any
> > other filter if need post processing on MVs it can directly use the
> shared
> > methods. But, mInterpolate use them internally, no saving in sidedata,
> and
> > saving unnecessary processing.
>
> This design sounds good
>
>
> >
> >
> > Also, Paper [1] doesn’t uses window with OBMC at all. It just find normal
> > average without weight. Perhaps to compare papers I either need to add
> > multiple option for each setting or need to assign the algorithm as
> > researcher’s name in filter options.
>
>
>
Papers [1] and [2] apply functions or post-processing to the motion vectors,
so they need fast ME algorithms, which I'm currently working on. [*M]

Let me summarize the papers (from Email 1, this thread):

Paper [1]: Zhai et al. (2005) A Low Complexity Motion Compensated Frame
Interpolation Method

[Quote]
This paper proposes an MCFI method intended for real-time processing. It
first examines the motion vectors in the bitstream [*1]. An 8x8 block size is
used rather than 16x16 as in most cases; using a smaller block size leads to
denser mot

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-17 Thread Davinder Singh
On Wed, Jun 15, 2016 at 5:04 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> Hi
>
> On Tue, May 31, 2016 at 10:43:38PM +, Davinder Singh wrote:
> > There’s a lot of research done on Motion Estimation. Depending upon the
> > intended application of the resultant motion vectors, the method used for
> > motion estimation can be very different.
> >
> > Classification of Motion Estimation Methods:
> >
> > Direct Methods: In direct methods we calculate optical flow
> > <https://en.wikipedia.org/wiki/Optical_flow> in the scene.
> >
> > - Phase Correlation
> >
> > - Block Matching
> >
> > - Spatio-Temporal Gradient
> >
> >  - Optical flow: Uses optical flow equation to find motion in the scene.
> >
> >  - Pel-recursive: Also compute optical flow, but in such a way that allow
> > recursive computability on vector fields)
> >
> > Indirect Methods
> >
> > - Feature based Method: Find features in the frame, and used for
> estimation.
> >
> > Here are some papers on Frame Rate Up-Conversion (FRUC):
> >
> > Phase Correlation:
> >
> > This method relies on frequency-domain representation of data, calculated
> > using fast Fourier transform.
> > <https://en.wikipedia.org/wiki/Fast_Fourier_transform> Phase Correlation
> > provides a correlation surface from the comparison of images. This
> enables
> > the identification of motion on a pixel-by-pixel basis for correct
> > processing of each motion type. Since phase correlation operates in the
> > frequency rather than the spatial domain, it is able to zero in on
> details
> > while ignoring such factors as noise and grain within the picture. In
> other
> > words, the system is highly tolerant of the noise variations and rapid
> > changes in luminance levels that are found in many types of content –
> > resulting in high-quality performance on fades, objects moving in and out
> > of shade, and light ashes.
> >
> > Papers:
> >
> > [1] "Disney Research » Phase-Based Frame Interpolation for Video." IEEE
> > CVPR 2015 <https://www.disneyresearch.com/publication/phasebased/>
> >
> > [2] Yoo, DongGon et al. "Phase Correlated Bilateral Motion Estimation for
> > Frame Rate Up-Conversion." The 23rd International Technical Conference on
> > Circuits/Systems, Computers and Communications (ITC-CSCC Jul. 2008.
> >
> > <http://www.ieice.org/proceedings/ITC-CSCC2008/pdf/p385_G3-4.pdf>
> >
> > The video on paper [1] page demonstrate comparison between various
> methods.
> >
> > Optical Flow:
> >
> > http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf
> >
> > [3] Brox et al. "High accuracy optical flow estimation based on a theory
> > for warping." Computer Vision - ECCV 2004: 25-36.
> >
> > <
> >
> http://www.wisdom.weizmann.ac.il/~/vision/courses/2006_2/papers/optic_flow_multigrid/brox_eccv04_of.pdf
> > >
> >
> > Slowmovideo <http://slowmovideo.granjow.net/> open-source project is
> based
> > on Optical flow equation.
> >
> > Algorithm we can implement is based on block matching method.
> >
> > Motion Compensated Frame Interpolation
> >
> > Paper:
> >
> > [4] Zhai et al. "A low complexity motion compensated frame interpolation
> > method." IEEE ISCAS 2005: 4927-4930.
> >
> > <http://research.microsoft.com/pubs/69174/lowcomplexitymc.pdf>
> >
> > Block-based motion estimation and pixel-wise motion estimation are the
> two
> > main categories of motion estimation methods. In general, pixel-wise
> motion
> > estimation can attain accurate motion fields, but needs a substantial
> > amount of computation. In contrast, block matching algorithms (BMA) can
> be
> > efficiently implemented and provide good performance.
> >
> > Most MCFI algorithms utilize the block-matching algorithm (BMA) for
> motion
> > estimation (ME). BMA is simple and easy to implement. It also generates a
> > compactly represented motion field. However, unlike video compression, it
> > is more important to find true motion trajectories in MCFI. The objective
> > of MC in MCFI is not to minimize the energy of MC residual signals, but
> to
> > reconstruct interpolated frames with better visual quality.
> >
> > The algorithm uses motion vectors which are embedded in bit-stream. If
> > vectors exported by codec (using +export_mvs flag2) are used when
> > available, computation of the motion vectors w

Re: [FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-06-09 Thread Davinder Singh
On Mon, May 30, 2016 at 1:45 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Mon, May 23, 2016 at 05:09:35PM +, Davinder Singh wrote:
> >  vf_codecview.c |   55
> +--
> >  1 file changed, 45 insertions(+), 10 deletions(-)
> > 464b23c4638d1a408e8237651facf327994945bf
> 0001-vf_codecview-added-new-options.patch
> > From 641d6f92e792ea7def3610f5462b6bbec019c4b7 Mon Sep 17 00:00:00 2001
> > From: dsmudhar <ds.mud...@gmail.com>
> > Date: Mon, 23 May 2016 22:29:51 +0530
> > Subject: [PATCH] vf_codecview: added new options
> >
> > ---
> >  libavfilter/vf_codecview.c | 55
> +-
> >  1 file changed, 45 insertions(+), 10 deletions(-)
> >
> > diff --git a/libavfilter/vf_codecview.c b/libavfilter/vf_codecview.c
> > index e70b397..1cb521d 100644
> > --- a/libavfilter/vf_codecview.c
> > +++ b/libavfilter/vf_codecview.c
> > @@ -38,21 +38,39 @@
> >  #define MV_P_FOR  (1<<0)
> >  #define MV_B_FOR  (1<<1)
> >  #define MV_B_BACK (1<<2)
> > +#define MV_TYPE_FOR  (1<<0)
> > +#define MV_TYPE_BACK (1<<1)
> > +#define FRAME_TYPE_I (1<<0)
> > +#define FRAME_TYPE_P (1<<1)
> > +#define FRAME_TYPE_B (1<<2)
> >
> >  typedef struct {
> >  const AVClass *class;
> >  unsigned mv;
> > +unsigned frame_type;
> > +unsigned mv_type;
> >  int hsub, vsub;
> >  int qp;
> >  } CodecViewContext;
> >
> >  #define OFFSET(x) offsetof(CodecViewContext, x)
> >  #define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM
> > +#define CONST(name, help, val, unit) { name, help, 0,
> AV_OPT_TYPE_CONST, {.i64=val}, 0, 0, FLAGS, unit }
> > +
> >  static const AVOption codecview_options[] = {
> >  { "mv", "set motion vectors to visualize", OFFSET(mv),
> AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS, "mv" },
> > -{"pf", "forward predicted MVs of P-frames",  0,
> AV_OPT_TYPE_CONST, {.i64 = MV_P_FOR },  INT_MIN, INT_MAX, FLAGS, "mv"},
> > -{"bf", "forward predicted MVs of B-frames",  0,
> AV_OPT_TYPE_CONST, {.i64 = MV_B_FOR },  INT_MIN, INT_MAX, FLAGS, "mv"},
> > -{"bb", "backward predicted MVs of B-frames", 0,
> AV_OPT_TYPE_CONST, {.i64 = MV_B_BACK }, INT_MIN, INT_MAX, FLAGS, "mv"},
> > +CONST("pf", "forward predicted MVs of P-frames",  MV_P_FOR,
> "mv"),
> > +CONST("bf", "forward predicted MVs of B-frames",  MV_B_FOR,
> "mv"),
> > +CONST("bb", "backward predicted MVs of B-frames", MV_B_BACK,
> "mv"),
> > +{ "mv_type", "set motion vectors type", OFFSET(mv_type),
> AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS, "mv_type" },
> > +{ "mvt", "set motion vectors type", OFFSET(mv_type),
> AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS, "mv_type" },
> > +CONST("fp", "forward predicted MVs",  MV_TYPE_FOR,  "mv_type"),
> > +CONST("bp", "backward predicted MVs", MV_TYPE_BACK, "mv_type"),
> > +{ "frame_type", "set frame types to visualize motion vectors of",
> OFFSET(frame_type), AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS,
> "frame_type" },
> > +{ "ft", "set frame types to visualize motion vectors of",
> OFFSET(frame_type), AV_OPT_TYPE_FLAGS, {.i64=0}, 0, INT_MAX, FLAGS,
> "frame_type" },
> > +CONST("if", "I-frames", FRAME_TYPE_I, "frame_type"),
> > +CONST("pf", "P-frames", FRAME_TYPE_P, "frame_type"),
> > +CONST("bf", "B-frames", FRAME_TYPE_B, "frame_type"),
> >  { "qp", NULL, OFFSET(qp), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, .flags
> = FLAGS },
>
> the new options should be added at the end, inserting them in the middle
> breaks for example
> -flags2 +export_mvs -vf codecview=0:1
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Good people do not need laws to tell them to act responsibly, while bad
> people will find a way around the laws. -- Plato
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


patch attached.


0001-vf_codecview-added-new-options.patch
Description: Binary data


Re: [FFmpeg-devel] [PATCH] fix few compiler warnings

2016-06-02 Thread Davinder Singh
On Thu, Jun 2, 2016 at 5:18 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Sun, May 22, 2016 at 01:51:05AM +, Davinder Singh wrote:
> [...]
>
> >  vf_hwdownload.c |6 --
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 5eb7416fececde847414f37de9a78a4e1cd5e1af
> 0004-libavfilter-vf_hwdownload-show-error-when-ff_formats.patch
> > From d1d00989a374facba3cdf777d95c61bf385f1332 Mon Sep 17 00:00:00 2001
> > From: dsmudhar <ds.mud...@gmail.com>
> > Date: Sun, 22 May 2016 06:26:36 +0530
> > Subject: [PATCH 4/7] libavfilter/vf_hwdownload: show error when
> ff_formats_ref
> >  fails
> >
> > ---
> >  libavfilter/vf_hwdownload.c | 6 --
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/libavfilter/vf_hwdownload.c b/libavfilter/vf_hwdownload.c
> > index 2dcc9fa..79ea82d 100644
> > --- a/libavfilter/vf_hwdownload.c
> > +++ b/libavfilter/vf_hwdownload.c
> > @@ -56,8 +56,10 @@ static int hwdownload_query_formats(AVFilterContext
> *avctx)
> >  }
> >  }
> >
> > -ff_formats_ref(infmts,  >inputs[0]->out_formats);
> > -ff_formats_ref(outfmts, >outputs[0]->in_formats);
> > +if ((err = ff_formats_ref(infmts,  >inputs[0]->out_formats))
> < 0 ||
> > +(err = ff_formats_ref(outfmts, >outputs[0]->in_formats))
> < 0)
> > +return err;
>
> according to coverity this introduces a memleak
> (1362184)
> ill send you an invite so you can take a look
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Those who are too smart to engage in politics are punished by being
> governed by those who are dumber. -- Plato
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


this patch should fix it

Thanks,
DSM_


0001-vf_hwdownload-fix-memory-leak.patch
Description: Binary data


[FFmpeg-devel] [GSoC] Motion Interpolation

2016-05-31 Thread Davinder Singh
There’s a lot of research done on Motion Estimation. Depending upon the
intended application of the resultant motion vectors, the method used for
motion estimation can be very different.

Classification of Motion Estimation Methods:

Direct Methods: In direct methods we calculate optical flow
<https://en.wikipedia.org/wiki/Optical_flow> in the scene.

- Phase Correlation

- Block Matching

- Spatio-Temporal Gradient

 - Optical flow: Uses optical flow equation to find motion in the scene.

 - Pel-recursive: Also computes optical flow, but in such a way that allows
recursive computability on vector fields.

Indirect Methods

- Feature-based Method: Finds features in the frame, which are then used for estimation.

Here are some papers on Frame Rate Up-Conversion (FRUC):

Phase Correlation:

This method relies on frequency-domain representation of data, calculated
using fast Fourier transform.
<https://en.wikipedia.org/wiki/Fast_Fourier_transform> Phase Correlation
provides a correlation surface from the comparison of images. This enables
the identification of motion on a pixel-by-pixel basis for correct
processing of each motion type. Since phase correlation operates in the
frequency rather than the spatial domain, it is able to zero in on details
while ignoring such factors as noise and grain within the picture. In other
words, the system is highly tolerant of the noise variations and rapid
changes in luminance levels that are found in many types of content –
resulting in high-quality performance on fades, objects moving in and out
of shade, and light flashes.
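
As a quick reference, the core of phase correlation can be sketched as follows
(the standard formulation, not taken from any particular paper below): compute
the normalized cross-power spectrum of the two frames and take the peak of its
inverse transform,

R(u,v) = \frac{F_1(u,v)\,\overline{F_2(u,v)}}{\left| F_1(u,v)\,\overline{F_2(u,v)} \right|},
\qquad r = \mathcal{F}^{-1}\{R\},
\qquad (\Delta x, \Delta y) = \arg\max_{(x,y)} r(x,y)

where F_1 and F_2 are the Fourier transforms of the two frames (or blocks),
the overline denotes the complex conjugate, and the peak location gives the
dominant displacement.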

Papers:

[1] "Disney Research » Phase-Based Frame Interpolation for Video." IEEE
CVPR 2015 <https://www.disneyresearch.com/publication/phasebased/>

[2] Yoo, DongGon et al. "Phase Correlated Bilateral Motion Estimation for
Frame Rate Up-Conversion." The 23rd International Technical Conference on
Circuits/Systems, Computers and Communications (ITC-CSCC Jul. 2008.

<http://www.ieice.org/proceedings/ITC-CSCC2008/pdf/p385_G3-4.pdf>

The video on paper [1]'s page demonstrates a comparison between the various methods.

Optical Flow:

http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf

[3] Brox et al. "High accuracy optical flow estimation based on a theory
for warping." Computer Vision - ECCV 2004: 25-36.

<
http://www.wisdom.weizmann.ac.il/~/vision/courses/2006_2/papers/optic_flow_multigrid/brox_eccv04_of.pdf
>

The Slowmovideo <http://slowmovideo.granjow.net/> open-source project is based
on the optical flow equation.
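
For context, the optical flow (brightness constancy) constraint that these
methods build on is, in its standard form (sketched here only for reference):

I_x u + I_y v + I_t = 0

where (u, v) is the per-pixel motion and I_x, I_y, I_t are the spatial and
temporal image derivatives. Brox et al. [3] add robust penalties and
coarse-to-fine warping on top of this constraint to handle large motion.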

The algorithm we can implement is based on the block-matching method.

Motion Compensated Frame Interpolation

Paper:

[4] Zhai et al. "A low complexity motion compensated frame interpolation
method." IEEE ISCAS 2005: 4927-4930.



Block-based motion estimation and pixel-wise motion estimation are the two
main categories of motion estimation methods. In general, pixel-wise motion
estimation can attain accurate motion fields, but needs a substantial
amount of computation. In contrast, block matching algorithms (BMA) can be
efficiently implemented and provide good performance.

Most MCFI algorithms utilize the block-matching algorithm (BMA) for motion
estimation (ME). BMA is simple and easy to implement. It also generates a
compactly represented motion field. However, unlike video compression, it
is more important to find true motion trajectories in MCFI. The objective
of MC in MCFI is not to minimize the energy of MC residual signals, but to
reconstruct interpolated frames with better visual quality.
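
To make the BMA step concrete, here is a minimal full-search SAD sketch
(illustrative only -- not code from any existing filter or from paper [4];
the function and variable names are made up):

#include <stdint.h>
#include <stdlib.h>

/* Full-search block matching on one plane: for the N x N block whose top-left
 * corner is (bx, by) in cur, find the displacement (*mv_x, *mv_y) within +/- R
 * that minimizes the sum of absolute differences (SAD) against ref.
 * Both planes are w x h pixels with line size stride. */
static void full_search_sad(const uint8_t *cur, const uint8_t *ref,
                            int w, int h, int stride, int N, int R,
                            int bx, int by, int *mv_x, int *mv_y)
{
    uint64_t best = UINT64_MAX;

    *mv_x = *mv_y = 0;
    for (int dy = -R; dy <= R; dy++) {
        for (int dx = -R; dx <= R; dx++) {
            int x = bx + dx, y = by + dy;
            uint64_t sad = 0;

            /* skip candidates that fall outside the reference frame */
            if (x < 0 || y < 0 || x + N > w || y + N > h)
                continue;
            for (int j = 0; j < N; j++)
                for (int i = 0; i < N; i++)
                    sad += abs(cur[(by + j) * stride + bx + i] -
                               ref[(y  + j) * stride + x  + i]);
            if (sad < best) {
                best  = sad;
                *mv_x = dx;
                *mv_y = dy;
            }
        }
    }
}

Over a whole frame this costs W*H * (2R+1)^2 pixel comparisons, independent
of the block size N.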

The algorithm uses the motion vectors which are embedded in the bitstream. If
the vectors exported by the codec (using +export_mvs flag2) are used when
available, computation of the motion vectors will be significantly reduced
for realtime playback. Otherwise the mEstimate filter will generate the MVs,
and to make the process faster, the same algorithms used by x264 and x265 -
Diamond, Hex, UMH, Star - will be implemented in the filter. The other filter,
mInterpolate, will use the MVs in the frame side data to interpolate frames
using various methods - OBMC (overlapped block motion compensation), simple
frame blending, frame duplication etc.
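
For reference, this is roughly how a consumer can read the exported vectors
from a decoded frame's side data (a minimal sketch assuming the decoder was
opened with -flags2 +export_mvs; error handling omitted):

#include <stdio.h>
#include <libavutil/frame.h>
#include <libavutil/motion_vector.h>

/* Walk the motion vectors attached to a decoded AVFrame, if any. */
static void dump_mvs(const AVFrame *frame)
{
    AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
    const AVMotionVector *mvs;
    int i, nb;

    if (!sd)
        return; /* no vectors were exported for this frame */
    mvs = (const AVMotionVector *)sd->data;
    nb  = sd->size / sizeof(*mvs);
    for (i = 0; i < nb; i++) {
        const AVMotionVector *mv = &mvs[i];
        /* mv->source < 0: predicted from a past frame,
         * mv->source > 0: predicted from a future frame */
        printf("%d: %dx%d block (%d,%d) -> (%d,%d)\n",
               mv->source, mv->w, mv->h,
               mv->src_x, mv->src_y, mv->dst_x, mv->dst_y);
    }
}

mEstimate would attach the same kind of side data itself, so downstream
filters would not need to care whether the vectors came from the decoder or
from the filter.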

However, MVs generated based on SAD or BAD might bring serious artifacts if
they are used directly. So the algorithm first examines the motion vectors
and classifies them into two groups: one group with vectors which are considered
to represent "true" motion, the other having "bad" vectors. It then carries out
overlapped-block bi-directional motion estimation on the corresponding blocks
having "bad" MVs. Finally, it uses motion vector post-processing and
overlapped block motion compensation to generate the interpolated frames and
further reduce blocking artifacts. Details on each step are in the paper
[4].
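
One common way to write the final interpolation step, combining bilateral MC
with OBMC (a generic sketch, not the exact formulation of paper [4]):

\hat{F}_{t+1/2}(x) = \frac{\sum_b w_b(x)\,\bigl[\tfrac{1}{2} F_t(x - v_b/2) + \tfrac{1}{2} F_{t+1}(x + v_b/2)\bigr]}{\sum_b w_b(x)}

where the sum runs over the blocks b whose overlapping windows cover pixel x,
v_b is the motion vector of block b, and w_b is the window weight (linear or
raised cosine).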

Paper 2:

[5] Choi et al. "Motion-compensated frame interpolation using bilateral
motion estimation and adaptive overlapped block motion compensation." Circuits
and Systems for Video Technology, IEEE 

Re: [FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-05-23 Thread Davinder Singh
On Mon, May 23, 2016 at 10:15 PM Davinder Singh <ds.mud...@gmail.com> wrote:

> On Sun, May 15, 2016 at 1:26 AM Michael Niedermayer <mich...@niedermayer.cc>
> wrote:
>
>> it would be better if the previous syntax would still work, in addition
>> to any new things
>>
>> bug reports, mailing list posts, user experience and scripts may
>> use the old syntax
>>
>>
>> [...]
>>
>>
>> --
>> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>>
>> Good people do not need laws to tell them to act responsibly, while bad
>> people will find a way around the laws. -- Plato
>> ___
>> ffmpeg-devel mailing list
>> ffmpeg-devel@ffmpeg.org
>> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
>
> [...]
>

minor change:
removed the frame-type condition (... || s->frame_type) from PATCH lines 62 and
81, as specifying only frame_type won't do anything.

Thanks,
DSM_


Re: [FFmpeg-devel] [PATCH] fix few compiler warnings

2016-05-21 Thread Davinder Singh
On Sun, May 22, 2016 at 2:09 AM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Sat, May 21, 2016 at 02:21:17PM +, Davinder Singh wrote:
> > hi,
> >
> > this patch fixes following compiler warnings:
> >
> > libavcodec/cfhd.c:346:78: warning: format specifies type 'unsigned short'
> > but the argument has type 'int' [-Wformat]
> > av_log(avctx, AV_LOG_DEBUG, "Small chunk length %"PRIu16"
> > %s\n", data * 4, tag < 0 ? "optional" : "required");
> > ~~
> >   ^~~~
> > libavcodec/cfhd.c:472:110: warning: format specifies type 'unsigned
> short'
> > but the argument has type 'int' [-Wformat]
> > av_log(avctx, AV_LOG_DEBUG, "Start of lowpass coeffs
> component
> > %"PRIu16" height:%d, width:%d\n", s->channel_num, lowpass_height,
> > lowpass_width);
> >
> >  ~~^~
> > libavcodec/cfhd.c:490:77: warning: format specifies type 'unsigned short'
> > but the argument has type 'int' [-Wformat]
> > av_log(avctx, AV_LOG_DEBUG, "Lowpass coefficients
> %"PRIu16"\n",
> > lowpass_width * lowpass_height);
> >   ~~
> >  ^~
> >
> >
> >
> > libavcodec/dv_tablegen.c:30:60: warning: format specifies type 'char' but
> > the argument has type 'uint32_t' (aka 'unsigned int') [-Wformat]
> >"{0x%"PRIx32", %"PRId8"}", data[i].vlc, data[i].size)
> > ~~~^~~~
> > libavcodec/tableprint.h:37:29: note: expanded from macro
> > 'WRITE_1D_FUNC_ARGV'
> >printf(" "fmtstr",", __VA_ARGS__);\
> > ^~~
> > libavcodec/dv_tablegen.c:30:60: warning: format specifies type 'char' but
> > the argument has type 'uint32_t' (aka 'unsigned int') [-Wformat]
> >"{0x%"PRIx32", %"PRId8"}", data[i].vlc, data[i].size)
> > ~~~^~~~
> >
> >
> >
> > libavfilter/af_hdcd.c:896:57: warning: shifting a negative signed value
> is
> > undefined [-Wshift-negative-value]
> > state->readahead = readaheadtab[bits & ~(-1 << 8)];
> >
> >
> >
> > libavfilter/vf_hwdownload.c:59:5: warning: ignoring return value of
> > function declared with warn_unused_result attribute [-Wunused-result]
> > ff_formats_ref(infmts,  >inputs[0]->out_formats);
> > ^~ ~~~
> > libavfilter/vf_hwdownload.c:60:5: warning: ignoring return value of
> > function declared with warn_unused_result attribute [-Wunused-result]
> > ff_formats_ref(outfmts, >outputs[0]->in_formats);
> > ^~ ~~~
> >
> >
> >
> > libavutil/opencl.c:456:17: warning: variable 'kernel_source' is used
> > uninitialized whenever 'for' loop exits because its condition is false
> > [-Wsometimes-uninitialized]
> > for (i = 0; i < opencl_ctx.kernel_code_count; i++) {
> > ^~~~
> > libavutil/opencl.c:466:10: note: uninitialized use occurs here
> > if (!kernel_source) {
> >  ^
> > libavutil/opencl.c:456:17: note: remove the condition if it is always
> true
> > for (i = 0; i < opencl_ctx.kernel_code_count; i++) {
> > ^~~~
> > libavutil/opencl.c:448:30: note: initialize the variable 'kernel_source'
> to
> > silence this warning
> > const char *kernel_source;
> >  ^
> >   = NULL
>
> >  libavcodec/cfhd.c   |6 +++---
> >  libavcodec/dv_tablegen.c|2 +-
> >  libavfilter/af_hdcd.c   |2 +-
> >  libavfilter/vf_hwdownload.c |6 --
> >  libavutil/opencl.c  |2 +-
>
> please split this patch
> the fixed warnings are unrelated to each other and possibly differnt
> developers would like to reply to different parts
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> I have often repented speaking, but never of holding my tongue.
> -- Xenocrates
> ___

Re: [FFmpeg-devel] [PATCH] fix few compiler warnings

2016-05-21 Thread Davinder Singh
2 more:

libavcodec/pngenc.c:274:25: warning: assigning to 'Bytef *' (aka 'unsigned
char *') from 'const uint8_t *' (aka 'const unsigned char *') discards
qualifiers
  [-Wincompatible-pointer-types-discards-qualifiers]
s->zstream.next_in  = data;
^ 



libavcodec/tscc.c:81:26: warning: assigning to 'Bytef *' (aka 'unsigned
char *') from 'const uint8_t *' (aka 'const unsigned char *') discards
qualifiers
  [-Wincompatible-pointer-types-discards-qualifiers]
c->zstream.next_in   = buf;
 ^ ~~~
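
The usual ways to silence this kind of zlib warning look roughly like the
following (an illustrative sketch, not necessarily what the attached patch
does):

#include <zlib.h>
#include <stdint.h>

/* Feed read-only input to zlib's historically non-const next_in pointer. */
static void set_zlib_input(z_stream *zs, const uint8_t *data, unsigned int size)
{
#ifdef ZLIB_CONST
    zs->next_in = data;            /* with ZLIB_CONST, next_in is z_const Bytef * */
#else
    zs->next_in = (Bytef *)data;   /* cast away const; zlib only reads from next_in */
#endif
    zs->avail_in = size;
}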


0001-fixed-assignment-discards-qualifier-warnings.patch
Description: Binary data


Re: [FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-05-14 Thread Davinder Singh
On Fri, May 13, 2016 at 11:35 PM Davinder Singh <ds.mud...@gmail.com> wrote:

> should fix fate :)
>
*attached patch in that mail should fix fate.


Re: [FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-05-13 Thread Davinder Singh
should fix fate :)

On Wed, May 11, 2016 at 6:32 PM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Wed, May 11, 2016 at 12:41:43PM +, Davinder Singh wrote:
> > single patch
> >
> > On Sun, May 8, 2016 at 1:18 AM Davinder Singh <ds.mud...@gmail.com>
> wrote:
> >
> > > separated motion vector types (forward or backward) from frame picture
> > > types as MVs are associated with picture types only in video coding.
> > >
> > > option `mv` can have two values:
> > > forward predicted or backward predicted.
> > >
> > > option `frames` can have three values:
> > > p-frames, i-frames and b-frames.
> > >
> > > ex:
> > > only forward predicted mvs of all frames:
> > > -vf codecview=mv=fp
> > >
> > > mvs (both forward or backward predicted) of P or B-frames:
> > > -vf codecview=mv=fp+bp:frames=pf+bf
> > >
> > > Regards,
> > > DSM_
> > >
>
> >  doc/filters.texi   |   30 --
> >  libavfilter/vf_codecview.c |   36 ++--
> >  2 files changed, 50 insertions(+), 16 deletions(-)
> > 6168c73a45d4b183a4478909e4f8f3b0e47d1738
> 0001-vf_codecview-improved-filter-options.patch
> > From 0c2c258bd14d5dd58351271cc8c8859cd5edbf26 Mon Sep 17 00:00:00 2001
> > From: dsmudhar <ds.mud...@gmail.com>
> > Date: Wed, 11 May 2016 17:57:39 +0530
> > Subject: [PATCH] vf_codecview: improved filter options
>
> this breaks make fate
>
> make fate-filter-codecview-mvs
> TESTfilter-codecview-mvs
> --- ./tests/ref/fate/filter-codecview-mvs   2016-05-11
> 04:21:34.187662201 +0200
> +++ tests/data/fate/filter-codecview-mvs2016-05-11
> 14:58:43.732467592 +0200
> @@ -1,65 +0,0 @@
> -#tb 0: 32768/785647
> -#media_type 0: video
> -#codec_id 0: rawvideo
> -#dimensions 0: 576x320
> -#sar 0: 0/1
> -0,  0,  0,1,   276480, 0x5f7a0d4f
> -0,  1,  1,1,   276480, 0x5f7a0d4f
> -0,  2,  2,1,   276480, 0x5f7a0d4f
> -0,  3,  3,1,   276480, 0x5f7a0d4f
> -0,  4,  4,1,   276480, 0x5f7a0d4f
> -0,  5,  5,1,   276480, 0x5f7a0d4f
> -0,  6,  6,1,   276480, 0x5f7a0d4f
> -0,  7,  7,1,   276480, 0x5f7a0d4f
> -0,  8,  8,1,   276480, 0x5f7a0d4f
> -0,  9,  9,1,   276480, 0x5f7a0d4f
> -0, 10, 10,1,   276480, 0x5f7a0d4f
> -0, 11, 11,1,   276480, 0x5f7a0d4f
> -0, 12, 12,1,   276480, 0x5f7a0d4f
> -0, 13, 13,1,   276480, 0x5f7a0d4f
> -0, 14, 14,1,   276480, 0x5f7a0d4f
> -0, 15, 15,1,   276480, 0x5f7a0d4f
> -0, 16, 16,1,   276480, 0xc3b80edf
> -0, 17, 17,1,   276480, 0x5f7a0d4f
> -0, 18, 18,1,   276480, 0x5f7a0d4f
> -0, 19, 19,1,   276480, 0x5f7a0d4f
> -0, 20, 20,1,   276480, 0xc3b80edf
> -0, 21, 21,1,   276480, 0x5f7a0d4f
> -0, 22, 22,1,   276480, 0x5f7a0d4f
> -0, 23, 23,1,   276480, 0x5f7a0d4f
> -0, 24, 24,1,   276480, 0xc3b80edf
> -0, 25, 25,1,   276480, 0x5f7a0d4f
> -0, 26, 26,1,   276480, 0x5f7a0d4f
> -0, 27, 27,1,   276480, 0x5f7a0d4f
> -0, 28, 28,1,   276480, 0xc3b80edf
> -0, 29, 29,1,   276480, 0x5f7a0d4f
> -0, 30, 30,1,   276480, 0x5f7a0d4f
> -0, 31, 31,1,   276480, 0x5f7a0d4f
> -0, 32, 32,1,   276480, 0xc3b80edf
> -0, 33, 33,1,   276480, 0x75641594
> -0, 34, 34,1,   276480, 0x32ee3526
> -0, 35, 35,1,   276480, 0xcb53479a
> -0, 36, 36,1,   276480, 0xe1be6e26
> -0, 37, 37,1,   276480, 0x5ce39368
> -0, 38, 38,1,   276480, 0x4ec1e418
> -0, 39, 39,1,   276480, 0x23c418ae
> -0, 40, 40,1,   276480, 0x036a5515
> -0, 41, 41,1,   276480, 0x7946efbd
> -0, 42, 42,1,   276480, 0xd9aa1382
> -0, 43, 43,1,   276480, 0x3863f9c8
> -0, 44, 44,1,   276480, 0x33e47330
> -0, 45,  

Re: [FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-05-11 Thread Davinder Singh
single patch

On Sun, May 8, 2016 at 1:18 AM Davinder Singh <ds.mud...@gmail.com> wrote:

> separated motion vector types (forward or backward) from frame picture
> types as MVs are associated with picture types only in video coding.
>
> option `mv` can have two values:
> forward predicted or backward predicted.
>
> option `frames` can have three values:
> p-frames, i-frames and b-frames.
>
> ex:
> only forward predicted mvs of all frames:
> -vf codecview=mv=fp
>
> mvs (both forward or backward predicted) of P or B-frames:
> -vf codecview=mv=fp+bp:frames=pf+bf
>
> Regards,
> DSM_
>


0001-vf_codecview-improved-filter-options.patch
Description: Binary data


Re: [FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-05-07 Thread Davinder Singh
Changed the default value of `frames` to zero. If the `frames` option is not
set, it is ignored and only the direction is checked; this way all picture
types are considered when drawing MVs for ME, since more frame types are
available there.

On Sun, May 8, 2016 at 3:02 AM Moritz Barsnick <barsn...@gmx.net> wrote:

> On Sat, May 07, 2016 at 19:48:31 +, Davinder Singh wrote:
>
> > -{ "frames", "set frame types to display MVs of", OFFSET(frames),
> AV_OPT_TYPE_FLAGS, {.i64=3}, 0, INT_MAX, FLAGS, "frames" },
> > +{ "frames", "set frame types to display MVs of", OFFSET(frames),
> AV_OPT_TYPE_FLAGS, {.i64=7}, 0, INT_MAX, FLAGS, "frames" },
>
>  ^
> This could be written symbolically to make its value more obvious.
>
> > -forward predicted MVs of P-frames
> > +predicted frames (p-frames)
> >  @item bf
> > -forward predicted MVs of B-frames
> > -@item bb
> > -backward predicted MVs of B-frames
> > +bi-directionally predicted frames (b-frames)
>
> Didn't you just introduce capitalization of B and P in the 0002 patch,
> and now drop it here?
>
> > +CONST("pf", "p-frames", FRAME_TYPE_P, "frames"),
> > +CONST("bf", "b-frames", FRAME_TYPE_B, "frames"),
>
> Same here.
>

it was in docs only. all capital now. :)


>
> Moritz
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>


0003-vf_codecview-ignore-frame-types-ifnset.patch
Description: Binary data


[FFmpeg-devel] [PATCH][libavfilter] codecview: improved options

2016-05-07 Thread Davinder Singh
separated motion vector types (forward or backward) from frame picture
types as MVs are associated with picture types only in video coding.

option `mv` can have two values:
forward predicted or backward predicted.

option `frames` can have three values:
p-frames, i-frames and b-frames.

ex:
only forward predicted mvs of all frames:
-vf codecview=mv=fp

mvs (both forward or backward predicted) of P or B-frames:
-vf codecview=mv=fp+bp:frames=pf+bf

Regards,
DSM_


0002-vf_codecview-added-i-frame-support.patch
Description: Binary data


0001-vf_codecview-improved-filter-options.patch
Description: Binary data


[FFmpeg-devel] [PATCH] set exact ref frame in AVMotionVector

2016-05-06 Thread Davinder Singh
hi,
This changes the confusing "source" var of AVMotionVector to "ref" and
defines two constants: AV_MV_REF_FRAME_PREVIOUS and AV_MV_REF_FRAME_NEXT.
all changes:
https://github.com/dsmudhar/FFmpeg/commit/33b5ad805cccd7b48a6b0d643c39e8dd26f0e98b

please confirm this:
snowdec.c
Line 96: BlockNode *bn...
...
Line 112: avmv->source = -1 - bn->ref;
I searched the code and bn->ref is always set >= 0,
so I set it always to the previous frame:
avmv->ref = AV_MV_REF_FRAME_PREVIOUS;
since, in the current code, "source" is checked for > 0 to mean a future ref, and
line 112 always sets it negative.

Regards,
DSM_


0001-set-exact-ref-frame-in-AVMotionVector.patch
Description: Binary data


Re: [FFmpeg-devel] [Patch][GSoC] Motion Estimation filter

2016-04-13 Thread Davinder Singh
Good vectors? How can I improve them? Since it searches every possible
place, it should give the best match. Can you give more details on why the
surrounding vectors need to be considered?

I also tried to compare it with export_mvs; they seem quite similar.
The export_mvs ones sometimes use multiple vectors for certain blocks.


DSM_

On Tue, Apr 12, 2016 at 5:44 AM Michael Niedermayer <mich...@niedermayer.cc>
wrote:

> On Sat, Apr 09, 2016 at 11:50:07PM +, Davinder Singh wrote:
> > ok, applied the changes. with new closest vector condition. will add
> > threshold now.
>
> it may be beyond the scope of this qualification task but to get
> good quality vectors the surrounding vectors will have to be
> considered in choosing each vector
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> The bravest are surely those who have the clearest vision
> of what is before them, glory and danger alike, and yet
> notwithstanding go out to meet it. -- Thucydides
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>


Re: [FFmpeg-devel] [Patch][GSoC] Motion Estimation filter

2016-04-10 Thread Davinder Singh
here are samples:
big buck bunny: (block=16:search=16:step=2)
http://www.mediafire.com/download/iy35rcr6d66733o/output1.mp4

matrixbench: (block=16:search=7)
http://www.mediafire.com/download/ii3n9sn42bwp3sg/output_matrix.mp4

On Sun, Apr 10, 2016 at 5:20 AM Davinder Singh <ds.mud...@gmail.com> wrote:

> ok, applied the changes. with new closest vector condition. will add
> threshold now.
>
>
> On Sun, Apr 10, 2016 at 12:47 AM Paul B Mahol <one...@gmail.com> wrote:
>
>> On 4/9/16, Davinder Singh <ds.mud...@gmail.com> wrote:
>> > hi
>> > here's the new patch.
>> >
>> > changes: fixed bugs, added documentation
>> > added a step option, by default, while searching for macro-block, it
>> moves
>> > by 1 px in search area. this can be increased now.
>> > uses absolute values instead of squares.
>> > vector starts from center of macro-block, before it was at top-left.
>> >
>> > There were a lot of "Past duration 0.92 too large" warnings each
>> frame
>> > while encoding, perhaps its because the frame lags by 1 (as I store the
>> > requested frame in MEContext->next)
>> > this is fixed by changing: out_frame->pts = MEContext->next->pts
>> >
>> > details of the filter:
>> > The current filter is most basic block estimation technique which does
>> full
>> > search, which makes it quite slow, speed depends on the search area in
>> > which the block is searched and it is independent of the block size. if
>> we
>> > take N x N block size, and video is W x H. Then total no of blocks will
>> be
>> > WxH / N^2. if R is search parameter then the no of places block is
>> > searched: (2R + 1)^2, and for each block there will be N^2 iterations
>> (for
>> > comparing each pixel value which gives MAD. the one with minimum MAD is
>> > used to get MV), so for all blocks in a frame, there will be WxH/N^2 *
>> > (2R+1)^2*N^2 iterations = WxH * (2R+1)^2 iterations.
>> > e.g. for a 720p video, R = 7 it will be 1280*720*15^2 = 20736
>> iteration
>> > per frame.
>> >
>> > The current implementation only use first plane (luma) for estimation.
>> >
>> > Obviously I have to implement a faster algorithm, I chose this one for
>> > simplicity.
>>
>> > From 01f21e83d92389355105d4c9ba0ac1b835e343cb Mon Sep 17 00:00:00 2001
>> > From: dsmudhar <ds.mud...@gmail.com>
>> > Date: Sun, 10 Apr 2016 00:01:23 +0530
>> > Subject: [PATCH] added motion estimation filter
>> >
>> > ---
>> >  doc/filters.texi   |  16 +++
>> >  libavfilter/Makefile   |   1 +
>> >  libavfilter/allfilters.c   |   1 +
>> >  libavfilter/vf_mestimate.c | 245
>> +
>> >  4 files changed, 263 insertions(+)
>> >  create mode 100644 libavfilter/vf_mestimate.c
>> >
>> > diff --git a/doc/filters.texi b/doc/filters.texi
>> > index 82be06d..85757a1 100644
>> > --- a/doc/filters.texi
>> > +++ b/doc/filters.texi
>> > @@ -8933,6 +8933,22 @@ format=rgb24,mergeplanes=0x000102:yuv444p
>> >  @end example
>> >  @end itemize
>> >
>> > +@section mestimate
>> > +
>> > +Estimates the motion and generates motion vectors using block matching
>> algorithm.
>> > +
>> > +This filter accepts the following options:
>> > +@table @option
>> > +@item block
>> > +Set macroblock size. Default is @code{16}.
>> > +
>> > +@item search
>> > +Set search parameter. Default is @code{7}.
>> > +
>> > +@item step
>> > +Set step for movement of reference macroblock in search area. Default
>> is @code{1}.
>> > +@end table
>> > +
>> >  @section metadata, ametadata
>> >
>> >  Manipulate frame metadata.
>> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
>> > index 3a3de48..72b75d8 100644
>> > --- a/libavfilter/Makefile
>> > +++ b/libavfilter/Makefile
>> > @@ -198,6 +198,7 @@ OBJS-$(CONFIG_LUTYUV_FILTER) +=
>> vf_lut.o
>> >  OBJS-$(CONFIG_MASKEDMERGE_FILTER)+= vf_maskedmerge.o
>> framesync.o
>> >  OBJS-$(CONFIG_MCDEINT_FILTER)+= vf_mcdeint.o
>> >  OBJS-$(CONFIG_MERGEPLANES_FILTER)+= vf_mergeplanes.o
>> framesync.o
>> > +OBJS-$(CONFIG_MESTIMATE_FILTER)  += vf_mestimate.o
>> >  OBJS-$(CONFIG_METADATA_FILTER)   

Re: [FFmpeg-devel] [Patch][GSoC] Motion Estimation filter

2016-04-09 Thread Davinder Singh
OK, applied the changes, with the new closest-vector condition. Will add the
threshold now.

On Sun, Apr 10, 2016 at 12:47 AM Paul B Mahol <one...@gmail.com> wrote:

> On 4/9/16, Davinder Singh <ds.mud...@gmail.com> wrote:
> > hi
> > here's the new patch.
> >
> > changes: fixed bugs, added documentation
> > added a step option, by default, while searching for macro-block, it
> moves
> > by 1 px in search area. this can be increased now.
> > uses absolute values instead of squares.
> > vector starts from center of macro-block, before it was at top-left.
> >
> > There were a lot of "Past duration 0.92 too large" warnings each
> frame
> > while encoding, perhaps its because the frame lags by 1 (as I store the
> > requested frame in MEContext->next)
> > this is fixed by changing: out_frame->pts = MEContext->next->pts
> >
> > details of the filter:
> > The current filter is most basic block estimation technique which does
> full
> > search, which makes it quite slow, speed depends on the search area in
> > which the block is searched and it is independent of the block size. if
> we
> > take N x N block size, and video is W x H. Then total no of blocks will
> be
> > WxH / N^2. if R is search parameter then the no of places block is
> > searched: (2R + 1)^2, and for each block there will be N^2 iterations
> (for
> > comparing each pixel value which gives MAD. the one with minimum MAD is
> > used to get MV), so for all blocks in a frame, there will be WxH/N^2 *
> > (2R+1)^2*N^2 iterations = WxH * (2R+1)^2 iterations.
> > e.g. for a 720p video, R = 7 it will be 1280*720*15^2 = 20736
> iteration
> > per frame.
> >
> > The current implementation only use first plane (luma) for estimation.
> >
> > Obviously I have to implement a faster algorithm, I chose this one for
> > simplicity.
>
> > From 01f21e83d92389355105d4c9ba0ac1b835e343cb Mon Sep 17 00:00:00 2001
> > From: dsmudhar <ds.mud...@gmail.com>
> > Date: Sun, 10 Apr 2016 00:01:23 +0530
> > Subject: [PATCH] added motion estimation filter
> >
> > ---
> >  doc/filters.texi   |  16 +++
> >  libavfilter/Makefile   |   1 +
> >  libavfilter/allfilters.c   |   1 +
> >  libavfilter/vf_mestimate.c | 245
> +
> >  4 files changed, 263 insertions(+)
> >  create mode 100644 libavfilter/vf_mestimate.c
> >
> > diff --git a/doc/filters.texi b/doc/filters.texi
> > index 82be06d..85757a1 100644
> > --- a/doc/filters.texi
> > +++ b/doc/filters.texi
> > @@ -8933,6 +8933,22 @@ format=rgb24,mergeplanes=0x000102:yuv444p
> >  @end example
> >  @end itemize
> >
> > +@section mestimate
> > +
> > +Estimates the motion and generates motion vectors using block matching
> algorithm.
> > +
> > +This filter accepts the following options:
> > +@table @option
> > +@item block
> > +Set macroblock size. Default is @code{16}.
> > +
> > +@item search
> > +Set search parameter. Default is @code{7}.
> > +
> > +@item step
> > +Set step for movement of reference macroblock in search area. Default
> is @code{1}.
> > +@end table
> > +
> >  @section metadata, ametadata
> >
> >  Manipulate frame metadata.
> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > index 3a3de48..72b75d8 100644
> > --- a/libavfilter/Makefile
> > +++ b/libavfilter/Makefile
> > @@ -198,6 +198,7 @@ OBJS-$(CONFIG_LUTYUV_FILTER) +=
> vf_lut.o
> >  OBJS-$(CONFIG_MASKEDMERGE_FILTER)+= vf_maskedmerge.o
> framesync.o
> >  OBJS-$(CONFIG_MCDEINT_FILTER)+= vf_mcdeint.o
> >  OBJS-$(CONFIG_MERGEPLANES_FILTER)+= vf_mergeplanes.o
> framesync.o
> > +OBJS-$(CONFIG_MESTIMATE_FILTER)  += vf_mestimate.o
> >  OBJS-$(CONFIG_METADATA_FILTER)   += f_metadata.o
> >  OBJS-$(CONFIG_MPDECIMATE_FILTER) += vf_mpdecimate.o
> >  OBJS-$(CONFIG_NEGATE_FILTER) += vf_lut.o
> > diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> > index b6f4a2c..6e86fd8 100644
> > --- a/libavfilter/allfilters.c
> > +++ b/libavfilter/allfilters.c
> > @@ -219,6 +219,7 @@ void avfilter_register_all(void)
> >  REGISTER_FILTER(MASKEDMERGE,maskedmerge,    vf);
> >  REGISTER_FILTER(MCDEINT,mcdeint,vf);
> >  REGISTER_FILTER(MERGEPLANES,mergeplanes,vf);
> > +REGISTER_FILTER(MESTIMATE,  mestimate,  vf);
> 

Re: [FFmpeg-devel] [Patch][GSoC] Motion Estimation filter

2016-04-09 Thread Davinder Singh
hi
here's the new patch.

changes: fixed bugs, added documentation
added a step option: by default, while searching for a macroblock, it moves
by 1 px in the search area; this can now be increased.
uses absolute differences instead of squared differences.
the vector now starts from the center of the macroblock; before it was at the top-left.

There were a lot of "Past duration 0.92 too large" warnings for each frame
while encoding; perhaps it's because the frame lags by 1 (as I store the
requested frame in MEContext->next).
This is fixed by changing: out_frame->pts = MEContext->next->pts

details of the filter:
The current filter is the most basic block-matching technique, which does a
full search; this makes it quite slow. The speed depends on the search area in
which the block is searched and is independent of the block size. If we take
an N x N block size and the video is W x H, then the total number of blocks
will be WxH / N^2. If R is the search parameter, then the number of places a
block is searched at is (2R + 1)^2, and for each candidate there are N^2
iterations (comparing each pixel value, which gives the MAD; the candidate
with the minimum MAD gives the MV). So for all blocks in a frame there will
be WxH/N^2 * (2R+1)^2 * N^2 iterations = WxH * (2R+1)^2 iterations.
E.g. for a 720p video with R = 7 that is 1280*720*15^2 = 207360000 iterations
per frame.

The current implementation only uses the first plane (luma) for estimation.

Obviously I have to implement a faster algorithm; I chose this one for its
simplicity.
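
As a starting point for that, here is a rough sketch of the small-diamond
refinement idea (the names are hypothetical, not the filter's actual code;
block_sad() is an assumed helper that returns the SAD of the block at
(bx, by) in the current frame against the block displaced by (dx, dy) in the
reference frame):

#include <stdint.h>

static uint64_t block_sad(int bx, int by, int dx, int dy);

/* Refine (*mv_x, *mv_y) by repeatedly checking the four small-diamond
 * neighbors of the current best candidate; stop when the center wins. */
static void small_diamond_search(int bx, int by, int R, int *mv_x, int *mv_y)
{
    static const int pat[4][2] = { { 0, -1 }, { -1, 0 }, { 1, 0 }, { 0, 1 } };
    uint64_t best = block_sad(bx, by, *mv_x, *mv_y);
    int improved;

    do {
        improved = 0;
        for (int k = 0; k < 4; k++) {
            int dx = *mv_x + pat[k][0];
            int dy = *mv_y + pat[k][1];
            uint64_t sad;

            if (dx < -R || dx > R || dy < -R || dy > R)
                continue;
            sad = block_sad(bx, by, dx, dy);
            if (sad < best) {
                best  = sad;
                *mv_x = dx;
                *mv_y = dy;
                improved = 1;
            }
        }
    } while (improved);
}

Instead of visiting all (2R+1)^2 candidates, this only walks downhill from a
predicted start vector, which is the basic idea behind the Diamond/Hex/UMH
searches used by x264 and x265.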


DSM_

"We are merely explorers of infinity, in the pursuit of absolute perfection"

On Tue, Apr 5, 2016 at 12:25 AM Michael Niedermayer 
wrote:

> [...]

> missing documentation
>
> ive tested it as in
> ./ffplay matrixbench_mpeg2.mpg -vf mestimate=16,codecview=mv=7
> or
> ./ffplay ~/videos/matrixbench_mpeg2.mpg -vf mestimate=16:32,codecview=mv=7
>
> the vectors do not seem to represent motion, they all point in the
> same direction more or less
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> You can kill me, but you cannot change the truth.
>


0001-added-motion-estimation-filter.patch
Description: Binary data


Re: [FFmpeg-devel] [Patch][GSoC] Motion Estimation filter

2016-04-03 Thread Davinder Singh
sorry about that,
here is the recreated patch.


DSM_

On Sun, Apr 3, 2016 at 11:09 PM Davinder Singh <ds.mud...@gmail.com> wrote:

> Qualification task for Motion interpolation project.
>
> here is basic motion estimation filter which uses block matching
> technique, does full search in (default) 7 px region by 1px step.
>
> Thanks,
> DSM_
>


0001-motion-estimation-filter.patch
Description: Binary data


[FFmpeg-devel] [Patch][GSoC] Motion Estimation filter

2016-04-03 Thread Davinder Singh
Qualification task for Motion interpolation project.

Here is a basic motion estimation filter which uses the block-matching
technique; it does a full search in a (default) 7 px region with a 1 px step.

Thanks,
DSM_


0001-added-motion-estimattion-filter.patch
Description: Binary data