Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-30 Thread Robert Krüger
On Tue, Aug 30, 2016 at 3:13 AM, Davinder Singh  wrote:

> On Sat, Aug 27, 2016 at 6:15 PM Robert Krüger 
> wrote:
>
> > [...]
> > what is the way to best contribute with test cases? I have two samples that
> > I use for testing, so far the results look very, very promising but there
> > are still a few artefact problems, so these could maybe serve as a good
> > test case. In some cases the artefacts almost certainly look like there is
> > a bug in motion vector calculation as a very large area suddenly begins to
> > move in which really only a small part is/should be moving.
> >
> > How do I make this available to you or other devs at this stage? Just trac
> > tickets or is it too early for that and you would like to work on this
> > differently? After all it is always a grey area, when this can be
> > considered solved, as it is a process of gradual improvements, so maybe
> > it's not well-suited for a ticket.
> >
> > Let me know. Happy to contribute samples and some testing time here and
> > there.
> >
>
> I'm currently testing support for an unrestricted search area that can be
> used with EPZS; it has improved the quality.
> Once I send the patch, you can test whether it actually reduces the
> artifacts or makes them worse.
>
>
OK, great. I'll test the patch when it's there.

Have you looked at the example with the moving wall? This really looks a
bit like a bug in the motion vectors, and I also had the impression that
this wasn't there when I was testing with an earlier version from your
branch, but I cannot be 100% sure.


> For smaller details, newer recursive algorithms should perform better,
> like this one: https://www.osapublishing.org/jdt/abstract.cfm?uri=jdt-11-7-580
> which uses modified 3D recursive search iteratively.
> So, at this point, before any new algorithm is implemented, the best way to
> test is to verify whether the experiments I do improve the quality for most
> of the samples.
>
>
Makes sense.


> One would like to compare PSNR, as it's hard to compare each frame visually.
> http://ffmpeg.org/pipermail/ffmpeg-devel/2016-April/193067.html (for better
> results, the original sample should be 60 fps, subsampled to 30)
> For visual testing, I used to transcode the interpolated sample to images
> and compare them to the original ones.
>
> Thanks for testing.

Thanks for building this great filter.

Robert
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-29 Thread Davinder Singh
On Mon, Aug 29, 2016 at 12:20 PM Clément Bœsch  wrote:

> On Sun, Aug 28, 2016 at 11:31:10AM +0200, Paul B Mahol wrote:
> > On Sat, Aug 27, 2016 at 2:45 PM, Robert Krüger  >
> > wrote:
> > >
> > > what is the way to best contribute with test cases? I have two samples that
> > > I use for testing, so far the results look very, very promising but there
> > > are still a few artefact problems, so these could maybe serve as a good
> > > test case. In some cases the artefacts almost certainly look like there is
> > > a bug in motion vector calculation as a very large area suddenly begins to
> > > move in which really only a small part is/should be moving.
> > >
> > > How do I make this available to you or other devs at this stage? Just trac
> > > tickets or is it too early for that and you would like to work on this
> > > differently? After all it is always a grey area, when this can be
> > > considered solved, as it is a process of gradual improvements, so maybe
> > > it's not well-suited for a ticket.
> > >
> > > Let me know. Happy to contribute samples and some testing time here and
> > > there.
> >
> >
> > You can provide them either publicly or privately to any of the devs
> > interested.
> > I'm always interested in short samples exhibiting the problem.
>
> Using http://b.pkh.me/sfx-sky.mov and comparing:
>
>   ./ffplay -flags2 +export_mvs sfx-sky.mov -vf codecview=mv=pf
>
> with
>
>   ./ffplay sfx-sky.mov -vf mestimate,codecview=mv=pf
>
> The encoded mvs look much more meaningful than the ones found with
> mestimate. Typically, if you're looking for a global motion of some sort,
> the "native" mvs make it much clearer that there is a mostly static area
> at the bottom and a panning one on top, with its direction pretty obvious.
> With mestimate, it just looks like small noise. Any plan to improve this?
>
> --
> Clément B
>

that's probably because mestimate doesn't use a penalty term, as
minterpolate and encoders do, to make the motion field smooth; mestimate
just stores the best match. it can easily be done by adding
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/vf_minterpolate.c#L274
to the default cost function:
https://github.com/FFmpeg/FFmpeg/blob/master/libavfilter/motion_estimation.c#L59

The reason I didn't do that yet is that we have plans to make a Motion
Estimation API, and the cost function doesn't seem to be the correct place
for a penalty term.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-29 Thread Davinder Singh
On Sat, Aug 27, 2016 at 6:15 PM Robert Krüger 
wrote:

> [...]
> what is the way to best contribute with test cases? I have two samples that
> I use for testing, so far the results look very, very promising but there
> are still a few artefact problems, so these could maybe serve as a good
> test case. In some cases the artefacts almost certainly look like there is
> a bug in motion vector calculation as a very large area suddenly begins to
> move in which really only a small part is/should be moving.
>
> How do I make this available to you or other devs at this stage? Just trac
> tickets or is it too early for that and you would like to work on this
> differently? After all it is always a grey area, when this can be
> considered solved, as it is a process of gradual improvements, so maybe
> it's not well-suited for a ticket.
>
> Let me know. Happy to contribute samples and some testing time here and
> there.
>

I'm currently testing support for an unrestricted search area that can be
used with EPZS; it has improved the quality.
Once I send the patch, you can test whether it actually reduces the
artifacts or makes them worse.

For smaller details, newer recursive algorithms should perform better,
like this one: https://www.osapublishing.org/jdt/abstract.cfm?uri=jdt-11-7-580
which uses modified 3D recursive search iteratively.
So, at this point, before any new algorithm is implemented, the best way to
test is to verify whether the experiments I do improve the quality for most
of the samples.

One would like to compare PSNR, as it's hard to compare each frame visually.
http://ffmpeg.org/pipermail/ffmpeg-devel/2016-April/193067.html (for better
results, the original sample should be 60 fps, subsampled to 30)
For visual testing, I used to transcode the interpolated sample to images
and compare them to the original ones.

Thanks for testing.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-29 Thread Clément Bœsch
On Sun, Aug 28, 2016 at 11:31:10AM +0200, Paul B Mahol wrote:
> On Sat, Aug 27, 2016 at 2:45 PM, Robert Krüger 
> wrote:
> >
> > what is the way to best contribute with test cases? I have two samples that
> > I use for testing, so far the results look very, very promising but there
> > are still a few artefact problems, so these could maybe serve as a good
> > test case. In some cases the artefacts almost certainly look like there is
> > a bug in motion vector calculation as a very large area suddenly begins to
> > move in which really only a small part is/should be moving.
> >
> > How do I make this available to you or other devs at this stage? Just trac
> > tickets or is it too early for that and you would like to work on this
> > differently? After all it is always a grey area, when this can be
> > considered solved, as it is a process of gradual improvements, so maybe
> > it's not well-suited for a ticket.
> >
> > Let me know. Happy to contribute samples and some testing time here and
> > there.
> 
> 
> You can provide them either publicly or privately to any of the devs interested.
> I'm always interested in short samples exhibiting the problem.

Using http://b.pkh.me/sfx-sky.mov and comparing:

  ./ffplay -flags2 +export_mvs sfx-sky.mov -vf codecview=mv=pf

with

  ./ffplay sfx-sky.mov -vf mestimate,codecview=mv=pf

The encoded mvs look much more meaningful than the ones found with
mestimate. Typically, if you're looking for a global motion of some sort,
the "native" mvs make it much clearer that there is a mostly static area
at the bottom and a panning one on top, with its direction pretty obvious.
With mestimate, it just looks like small noise. Any plan to improve this?

-- 
Clément B.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-28 Thread Robert Krüger
On Sun, Aug 28, 2016 at 11:31 AM, Paul B Mahol  wrote:

> On Sat, Aug 27, 2016 at 2:45 PM, Robert Krüger 
> wrote:
> >
> > what is the way to best contribute with test cases? I have two samples that
> > I use for testing, so far the results look very, very promising but there
> > are still a few artefact problems, so these could maybe serve as a good
> > test case. In some cases the artefacts almost certainly look like there is
> > a bug in motion vector calculation as a very large area suddenly begins to
> > move in which really only a small part is/should be moving.
> >
> > How do I make this available to you or other devs at this stage? Just trac
> > tickets or is it too early for that and you would like to work on this
> > differently? After all it is always a grey area, when this can be
> > considered solved, as it is a process of gradual improvements, so maybe
> > it's not well-suited for a ticket.
> >
> > Let me know. Happy to contribute samples and some testing time here and
> > there.
>
>
> You can provide them either publicly or privately to any of the devs
> interested.
> I'm always interested in short samples exhibiting the problem.
>
>
OK, here are two short samples which I used to check the suitability of
minterpolate for creating super-slow-motion shots from suitable subjects:

https://www.dropbox.com/s/rklhbi6zxyrvwho/reload-gun.mov?dl=1
https://www.dropbox.com/s/c7kcqd4w8ksgpqv/running.mov?dl=1

for example, I tried this:

ffmpeg -i running.mov -vf
minterpolate=fps=250:mc_mode=obmc:me=epzs:me_mode=bidir:vsbmc=1:search_param=64:mb_size=8
-c:v mpeg4 -q:v 1 -an -y running-250-obmc-bidir-sp64-mb8-vsbmc-epzs-n.mov

Both videos look very good except for a few artefacts:
1) In the reload-gun example, parts of the wall (a thin strip underneath
the magazine) move, which looks like incorrect motion vectors (ca.
00:00:00:090 in the result).
2) In the running example, the hand shows noticeable artefacts, especially
towards the end of the video.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-28 Thread Paul B Mahol
On Sat, Aug 27, 2016 at 2:45 PM, Robert Krüger 
wrote:
>
> what is the way to best contribute with test cases? I have two samples that
> I use for testing, so far the results look very, very promising but there
> are still a few artefact problems, so these could maybe serve as a good
> test case. In some cases the artefacts almost certainly look like there is
> a bug in motion vector calculation as a very large area suddenly begins to
> move in which really only a small part is/should be moving.
>
> How do I make this available to you or other devs at this stage? Just trac
> tickets or is it too early for that and you would like to work on this
> differently? After all it is always a grey area, when this can be
> considered solved, as it is a process of gradual improvements, so maybe
> it's not well-suited for a ticket.
>
> Let me know. Happy to contribute samples and some testing time here and
> there.


You can provide them either publicly or privately to any of the devs interested.
I'm always interested in short samples exhibiting the problem.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-27 Thread Robert Krüger
On Thu, Aug 25, 2016 at 10:28 PM, Davinder Singh 
wrote:

> On Thu, Aug 25, 2016 at 5:01 AM Andy Furniss  wrote:
>
> >
> >
> > I am testing with a somewhat artificial sample in that it's a framerate
> > de-interlace + scale down of a 1080i master, though it is "real" in the
> > sense that I may want to repair similar files where people have produced
> > a juddery mess by using yadif=0.
> >
>
> thanks for testing.
>
> >
> > It's very fast and my old (2010 Panasonic plasma) TV can't interpolate
> > it without artifacting in a few places, it can interpolate a field rate
> > version flawlessly and both mcfps and minterpolate do a lot better with
> > a 50fps master version -> 100fps, though they are still not perfect.
> >
> > As well as being fast it has overlays of varying opacity and some
> > repeating patterns just to make things even harder.
> >
> > Some observations while trying to get the best result - given the number
> > of options only a small subset could be tested:
> >
> > aobmc vs ombc, vsbmc 0 or 1 = no real difference.
> >
>
> now our main focus will be on "better" motion estimation that removes
> artifacts in fast motion, rather than little tweaks like these.
>
>
> > Any me method other than epzs had far too many artifacts to be used.
> >
> > Raising search_param to 48 or 64 or 128 just causes new artifacts.
> >
>
> that hopefully could be fixed. working on it.
>
>
> >
> > Reducing mb_size causes new artifacts.
> >
>
> yes for higher resolutions. for very small ones, it could be essential.
>
>
> > bilat vs bidir - similar but bilat has some artifacts on a still shot
> > near the end of the defaults sample uploaded. bidir sometimes has green
> > near the top of the screen.
> >
>
> i see that green line in other samples too. investigating.
>
>
> >
> > There are of course many small artifacts, to be seen by slowmo/framestep
> > for both minterpolate and mcfps. Viewing fullspeed mcfps artifacts less
> > on the car when it touches the edges than minterpolate. Frame stepping
> > shows mcfps doesn't blend/blur as much on really fast moving background
> > as minterpolate does.
> >
> > Included in the link below (which is a tar to stop google drive making
> > terrible low quality/fps previews) are the 25fps master file, mcfps
> > interpolation to 50fps, minterpolate with default options and
> > minterpolate with defaults + bidir.
> >
> >
> > https://drive.google.com/file/d/0BxP5-S1t9VEEM2VrTzlVdGZURVk/view?
> usp=sharing
>
>
> thanks :)
>
> DSM_
>
>
what is the way to best contribute with test cases? I have two samples that
I use for testing, so far the results look very, very promising but there
are still a few artefact problems, so these could maybe serve as a good
test case. In some cases the artefacts almost certainly look like there is
a bug in motion vector calculation as a very large area suddenly begins to
move in which really only a small part is/should be moving.

How do I make this available to you or other devs at this stage? Just trac
tickets or is it too early for that and you would like to work on this
differently? After all it is always a grey area, when this can be
considered solved, as it is a process of gradual improvements, so maybe
it's not well-suited for a ticket.

Let me know. Happy to contribute samples and some testing time here and
there.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-26 Thread Michael Niedermayer
On Thu, Aug 25, 2016 at 08:17:03PM +, Davinder Singh wrote:
> On Thu, Aug 25, 2016 at 8:03 PM Michael Niedermayer 
> wrote:
> 
> > [...]
> >
> > why do these not try predictors like epzs / umh ?
> > i guess some paper doesn't say explicitly it should be done
> > but really it should be done for all predictive zonal searches IMO
> >
> 
> this should be in a different patch. no?
> yeah, the paper doesn't specify the use of predictors. i thought DS and HEX
> are just new, more efficient patterns.
> 
> 
> >
> > [...]
> > AVOption is not compatible with general enums, as C does not guarantee
> > them to be stored in an int; it just happens to work on most platforms
> >
> > [...]
> > with this style of smoothness cost you likely want to make an exception
> > for the 0,0 vector (giving it the same "penalty" as the median or even
> > very slightly less)
> > This would normally be implemented by not adding the penalty on
> > the 0,0 predictor check, but as it's implemented in the compare function
> > itself it would need a check
> > i think it would slightly improve quality. Of course if it does not then
> > ignore this suggestion
> >
> > also i will apply this patchset once the issues raised here are fixed
> > if no one objects; it's much easier and more efficient to work in main
> > git than on top of a growing patch
> >
> > Thanks
> >
> > [...]
> >
> > --
> > Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
> >
> > Republics decline into democracies and democracies degenerate into
> > despotisms. -- Aristotle

>  doc/filters.texi|  128 
>  libavfilter/Makefile|2 
>  libavfilter/allfilters.c|2 
>  libavfilter/motion_estimation.c |  432 +
>  libavfilter/motion_estimation.h |   87 ++
>  libavfilter/vf_mestimate.c  |  377 
>  libavfilter/vf_minterpolate.c   | 1247 
> 
>  7 files changed, 2275 insertions(+)
> b772583211ae3ed639af75173d4497c4c2850a8a  
> 0001-added-motion-estimation-and-interpolation-filters-v5F.patch
> From 167e0a8093b02f51eb753454093c6c1eabba4db6 Mon Sep 17 00:00:00 2001
> From: dsmudhar 
> Date: Tue, 23 Aug 2016 17:50:35 +0530
> Subject: [PATCH] added motion estimation and interpolation filters

applied

you might want to send a patch to add yourself to MAINTAINERs

thanks

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Avoid a single point of failure, be that a person or equipment.




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-25 Thread Michael Niedermayer
On Thu, Aug 25, 2016 at 08:17:03PM +, Davinder Singh wrote:
> On Thu, Aug 25, 2016 at 8:03 PM Michael Niedermayer 
> wrote:
> 
> > [...]
> >
> > why do these not try predictors like epzs / umh ?
> > i guess some paper doesn't say explicitly it should be done
> > but really it should be done for all predictive zonal searches IMO
> >
> 
> this should be in a different patch. no?

yes, different patch is probably easier

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The worst form of inequality is to try to make unequal things equal.
-- Aristotle




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-25 Thread Davinder Singh
On Thu, Aug 25, 2016 at 5:01 AM Andy Furniss  wrote:

>
>
> I am testing with a somewhat artificial sample in that it's a framerate
> de-interlace + scale down of a 1080i master, though it is "real" in the
> sense that I may want to repair similar files where people have produced
> a juddery mess by using yadif=0.
>

thanks for testing.

>
> It's very fast and my old (2010 Panasonic plasma) TV can't interpolate
> it without artifacting in a few places, it can interpolate a field rate
> version flawlessly and both mcfps and minterpolate do a lot better with
> a 50fps master version -> 100fps, though they are still not perfect.
>
> As well as being fast it has overlays of varying opacity and some
> repeating patterns just to make things even harder.
>
> Some observations while trying to get the best result - given the number
> of options only a small subset could be tested:
>
> aobmc vs ombc, vsbmc 0 or 1 = no real difference.
>

now our main focus will be on "better" motion estimation that removes
artifacts in fast motion, rather than little tweaks like these.


> Any me method other than epzs had far too many artifacts to be used.
>
> Raising search_param to 48 or 64 or 128 just causes new artifacts.
>

that hopefully could be fixed. working on it.


>
> Reducing mb_size causes new artifacts.
>

yes for higher resolutions. for very small ones, it could be essential.


> bilat vs bidir - similar but bilat has some artifacts on a still shot
> near the end of the defaults sample uploaded. bidir sometimes has green
> near the top of the screen.
>

i see that green line in other samples too. investigating.


>
> There are of course many small artifacts, to be seen by slowmo/framestep
> for both minterpolate and mcfps. Viewing fullspeed mcfps artifacts less
> on the car when it touches the edges than minterpolate. Frame stepping
> shows mcfps doesn't blend/blur as much on really fast moving background
> as minterpolate does.
>
> Included in the link below (which is a tar to stop google drive making
> terrible low quality/fps previews) are the 25fps master file, mcfps
> interpolation to 50fps, minterpolate with default options and
> minterpolate with defaults + bidir.
>
>
> https://drive.google.com/file/d/0BxP5-S1t9VEEM2VrTzlVdGZURVk/view?usp=sharing


thanks :)

DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-25 Thread Davinder Singh
On Thu, Aug 25, 2016 at 8:03 PM Michael Niedermayer 
wrote:

> [...]
>
> why do these not try predictors like epzs / umh ?
> i guess some paper doesn't say explicitly it should be done
> but really it should be done for all predictive zonal searches IMO
>

this should be in a different patch. no?
yeah, the paper doesn't specify the use of predictors. i thought DS and HEX
are just new, more efficient patterns.


>
> [...]
> AVOption is not compatible with general enums, as C does not guarantee
> them to be stored in an int; it just happens to work on most platforms
>
> [...]
> with this style of smoothness cost you likely want to make an exception
> for the 0,0 vector (giving it the same "penalty" as the median or even
> very slightly less)
> This would normally be implemented by not adding the penalty on
> the 0,0 predictor check, but as it's implemented in the compare function
> itself it would need a check
> i think it would slightly improve quality. Of course if it does not then
> ignore this suggestion
>
> also i will apply this patchset once the issues raised here are fixed
> if no one objects; it's much easier and more efficient to work in main
> git than on top of a growing patch
>
> Thanks
>
> [...]
>
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Republics decline into democracies and democracies degenerate into
> despotisms. -- Aristotle


0001-added-motion-estimation-and-interpolation-filters-v5F.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-25 Thread Michael Niedermayer
On Tue, Aug 23, 2016 at 01:17:47PM +, Davinder Singh wrote:
> On Tue, Aug 23, 2016 at 5:38 AM Andy Furniss  wrote:
> 
> > [...]
> >
> > Nice I can see the edges are better than the last version.
> >
> > The doc/filters.texi hunk doesn't apply to git master.
> >
> > I was going to post some comparisons with mcfps tonight, but I'll need
> > to redo them to see what's changed.
> 
> 
> fixed docs conflict.
> 
> thanks for testing!

[...]
> +uint64_t ff_me_search_tss(AVMotionEstContext *me_ctx, int x_mb, int y_mb, int *mv)
> +{
> +int x, y;
> +int x_min = FFMAX(me_ctx->x_min, x_mb - me_ctx->search_param);
> +int y_min = FFMAX(me_ctx->y_min, y_mb - me_ctx->search_param);
> +int x_max = FFMIN(x_mb + me_ctx->search_param, me_ctx->x_max);
> +int y_max = FFMIN(y_mb + me_ctx->search_param, me_ctx->y_max);
> +uint64_t cost, cost_min;
> +int step = ROUNDED_DIV(me_ctx->search_param, 2);
> +int i;
> +

> +int square[8][2] = {{0,-1}, {0,1}, {-1,0}, {1,0}, {-1,-1}, {-1,1}, {1,-1}, {1,1}};

const

> +
> +mv[0] = x_mb;
> +mv[1] = y_mb;
> +
> +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
> +return cost_min;
> +
> +do {
> +x = mv[0];
> +y = mv[1];
> +
> +for (i = 0; i < 8; i++)
> +COST_P_MV(x + square[i][0] * step, y + square[i][1] * step);
> +

> +step = step / 2;

 >>1 might be faster


> +
> +} while (step > 0);
> +
> +return cost_min;
> +}
> +
> +uint64_t ff_me_search_tdls(AVMotionEstContext *me_ctx, int x_mb, int y_mb, int *mv)
> +{
> +int x, y;
> +int x_min = FFMAX(me_ctx->x_min, x_mb - me_ctx->search_param);
> +int y_min = FFMAX(me_ctx->y_min, y_mb - me_ctx->search_param);
> +int x_max = FFMIN(x_mb + me_ctx->search_param, me_ctx->x_max);
> +int y_max = FFMIN(y_mb + me_ctx->search_param, me_ctx->y_max);
> +uint64_t cost, cost_min;
> +int step = ROUNDED_DIV(me_ctx->search_param, 2);
> +int i;
> +

> +int dia2[4][2] = {{-1, 0}, { 0,-1},
> +  { 1, 0}, { 0, 1}};

const


> +
> +mv[0] = x_mb;
> +mv[1] = y_mb;
> +
> +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
> +return cost_min;
> +
> +do {
> +x = mv[0];
> +y = mv[1];
> +
> +for (i = 0; i < 4; i++)
> +COST_P_MV(x + dia2[i][0] * step, y + dia2[i][1] * step);
> +
> +if (x == mv[0] && y == mv[1])
> +step = step / 2;
> +
> +} while (step > 0);
> +
> +return cost_min;
> +}
> +
> +uint64_t ff_me_search_ntss(AVMotionEstContext *me_ctx, int x_mb, int y_mb, int *mv)
> +{
> +int x, y;
> +int x_min = FFMAX(me_ctx->x_min, x_mb - me_ctx->search_param);
> +int y_min = FFMAX(me_ctx->y_min, y_mb - me_ctx->search_param);
> +int x_max = FFMIN(x_mb + me_ctx->search_param, me_ctx->x_max);
> +int y_max = FFMIN(y_mb + me_ctx->search_param, me_ctx->y_max);
> +uint64_t cost, cost_min;
> +int step = ROUNDED_DIV(me_ctx->search_param, 2);
> +int first_step = 1;
> +int i;
> +

> +int square[8][2] = {{0,-1}, {0,1}, {-1,0}, {1,0}, {-1,-1}, {-1,1}, {1,-1}, {1,1}};

const

[...]
> +uint64_t ff_me_search_ds(AVMotionEstContext *me_ctx, int x_mb, int y_mb, int *mv)
> +{
> +int x, y;
> +int x_min = FFMAX(me_ctx->x_min, x_mb - me_ctx->search_param);
> +int y_min = FFMAX(me_ctx->y_min, y_mb - me_ctx->search_param);
> +int x_max = FFMIN(x_mb + me_ctx->search_param, me_ctx->x_max);
> +int y_max = FFMIN(y_mb + me_ctx->search_param, me_ctx->y_max);
> +uint64_t cost, cost_min;
> +int i;
> +int dir_x, dir_y;
> +
> +int dia[8][2] = {{-2, 0}, {-1,-1}, { 0,-2}, { 1,-1},
> + { 2, 0}, { 1, 1}, { 0, 2}, {-1, 1}};
> +int dia2[4][2] = {{-1, 0}, { 0,-1},
> +  { 1, 0}, { 0, 1}};
> +
> +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
> +return cost_min;
> +
> +x = x_mb; y = y_mb;
> +dir_x = dir_y = 0;
> +
> +do {
> +x = mv[0];
> +y = mv[1];
> +
> +#if 1
> +for (i = 0; i < 8; i++)
> +COST_P_MV(x + dia[i][0], y + dia[i][1]);
> +#else
> +/* this version skips previously examined 3 or 5 locations based on prev origin */
> +if (dir_x <= 0)
> +COST_P_MV(x - 2, y);
> +if (dir_x <= 0 && dir_y <= 0)
> +COST_P_MV(x - 1, y - 1);
> +if (dir_y <= 0)
> +COST_P_MV(x, y - 2);
> +if (dir_x >= 0 && dir_y <= 0)
> +COST_P_MV(x + 1, y - 1);
> +if (dir_x >= 0)
> +COST_P_MV(x + 2, y);
> +if (dir_x >= 0 && dir_y >= 0)
> +COST_P_MV(x + 1, y + 1);
> +if (dir_y >= 0)
> +COST_P_MV(x, y + 2);
> +if (dir_x <= 0 && dir_y >= 0)
> +COST_P_MV(x - 1, y + 1);
> +
> +dir_x = mv[0] - x;
> +dir_y = mv[1] - y;
> +#endif
> +
> +} 

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-24 Thread Andy Furniss

Davinder Singh wrote:

On Tue, Aug 23, 2016 at 5:38 AM Andy Furniss 
wrote:


[...]

Nice I can see the edges are better than the last version.

The doc/filters.texi hunk doesn't apply to git master.

I was going to post some comparisons with mcfps tonight, but I'll
need to redo them to see what's changed.


I am testing with a somewhat artificial sample in that it's a framerate
de-interlace + scale down of a 1080i master, though it is "real" in the
sense that I may want to repair similar files where people have produced
a juddery mess by using yadif=0.

It's very fast and my old (2010 Panasonic plasma) TV can't interpolate
it without artifacting in a few places, it can interpolate a field rate
version flawlessly and both mcfps and minterpolate do a lot better with
a 50fps master version -> 100fps, though they are still not perfect.

As well as being fast it has overlays of varying opacity and some
repeating patterns just to make things even harder.

Some observations while trying to get the best result - given the number
of options only a small subset could be tested:

aobmc vs ombc, vsbmc 0 or 1 = no real difference.

Any me method other than epzs had far too many artifacts to be used.

Raising search_param to 48 or 64 or 128 just causes new artifacts.

Reducing mb_size causes new artifacts.

bilat vs bidir - similar but bilat has some artifacts on a still shot
near the end of the defaults sample uploaded. bidir sometimes has green
near the top of the screen.

There are of course many small artifacts, to be seen by slowmo/framestep
for both minterpolate and mcfps. Viewing fullspeed mcfps artifacts less
on the car when it touches the edges than minterpolate. Frame stepping
shows mcfps doesn't blend/blur as much on really fast moving background
as minterpolate does.

Included in the link below (which is a tar to stop google drive making
terrible low quality/fps previews) are the 25fps master file, mcfps
interpolation to 50fps, minterpolate with default options and
minterpolate with defaults + bidir.

https://drive.google.com/file/d/0BxP5-S1t9VEEM2VrTzlVdGZURVk/view?usp=sharing


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-23 Thread Davinder Singh
On Tue, Aug 23, 2016 at 5:38 AM Andy Furniss  wrote:

> [...]
>
> Nice I can see the edges are better than the last version.
>
> The doc/filters.texi hunk doesn't apply to git master.
>
> I was going to post some comparisons with mcfps tonight, but I'll need
> to redo them to see what's changed.


fixed docs conflict.

thanks for testing!


0001-added-motion-estimation-and-interpolation-filters.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-22 Thread Andy Furniss

Davinder Singh wrote:

On Wed, Jun 1, 2016 at 4:13 AM Davinder Singh  wrote:


[...]



final patch attached. please review.

this includes bug fixes and various other improvements. also added filter
docs.


Nice I can see the edges are better than the last version.

The doc/filters.texi hunk doesn't apply to git master.

I was going to post some comparisons with mcfps tonight, but I'll need 
to redo them to see what's changed.





Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-22 Thread Davinder Singh
On Wed, Jun 1, 2016 at 4:13 AM Davinder Singh  wrote:

> [...]
>

final patch attached. please review.

this includes bug fixes and various other improvements. also added filter
docs.


0001-added-motion-estimation-and-interpolation-filters-v3F.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-20 Thread Michael Niedermayer
On Sat, Aug 20, 2016 at 06:39:17PM +, Davinder Singh wrote:
> On Sat, Aug 20, 2016 at 5:45 PM Michael Niedermayer 
> wrote:
> 
> > how does it perform with matrixbench instead of BBB ?
> >
> > as reference 100fps matrixbench generated with mcfps
> > from https://github.com/michaelni/FFmpeg/tree/mcfps
> > ./ffmpeg -i matrixbench_mpeg2.mpg -vf 'mcfps=3:100,setpts=4*PTS'
> > output for easy visual comparison:
> > http://ffmpeg.org/~michael/matrix100.avi
> 
> 
> have a look: http://www.mediafire.com/?sssw8tqj5kn3vbk

thanks

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

What does censorship reveal? It reveals fear. -- Julian Assange




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-20 Thread Davinder Singh
On Sat, Aug 20, 2016 at 5:45 PM Michael Niedermayer 
wrote:

> how does it perform with matrixbench instead of BBB ?
>
> as reference 100fps matrixbench generated with mcfps
> from https://github.com/michaelni/FFmpeg/tree/mcfps
> ./ffmpeg -i matrixbench_mpeg2.mpg -vf 'mcfps=3:100,setpts=4*PTS'
> output for easy visual comparison:
> http://ffmpeg.org/~michael/matrix100.avi


have a look: http://www.mediafire.com/?sssw8tqj5kn3vbk


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-20 Thread Davinder Singh
On Fri, Aug 19, 2016 at 7:59 PM Robert Krüger 
wrote:

> [...]

Impressive results, great job!

thanks :)

>
> I just tried  minterpolate=fps=250:mc_mode=aobmc:me=epzs and did have some
> artefacts in one of my slowmo samples but overall the quality is very, very
> nice! If you're interested in more samples or in more testing, let me know.
>

search_param 32 (default) works best for me for 720p videos. for 1080p,
higher can be better; it reduces artifacts in fast motion. for low-end
(480p), p=16 works fine. you can also try the bidir me_mode.

>
> Is the command line I used the one best for reducing artefacts or are there
> options known to be better in terms of artefact reduction?
>


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-20 Thread Michael Niedermayer
On Thu, Aug 18, 2016 at 07:26:39PM +, Davinder Singh wrote:
> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:
> 
> > [...]
> >
> 
> i tried to modify EPZS. i removed the early termination threshold which
> skip some predictors :-/
> new score:
> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
> 
> original epzs:
> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200

how does it perform with matrixbench instead of BBB ?

as reference 100fps matrixbench generated with mcfps
from https://github.com/michaelni/FFmpeg/tree/mcfps
./ffmpeg -i matrixbench_mpeg2.mpg -vf 'mcfps=3:100,setpts=4*PTS' 
output for easy visual comparison:
http://ffmpeg.org/~michael/matrix100.avi

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The worst form of inequality is to try to make unequal things equal.
-- Aristotle




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Robert Krüger
On Fri, Aug 19, 2016 at 4:17 PM, Paul B Mahol  wrote:

> On 8/19/16, Davinder Singh  wrote:
> > On Fri, Aug 19, 2016 at 3:27 AM Paul B Mahol  wrote:
> >
> >> On 8/18/16, Paul B Mahol  wrote:
> >> > On 8/18/16, Davinder Singh  wrote:
> >> >> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol 
> wrote:
> >> >>
> >> >>> [...]
> >> >>>
> >> >>
> >> >> i tried to modify EPZS. i removed the early termination threshold
> which
> >> >> skip some predictors :-/
> >> >> new score:
> >> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> >> >> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
> >> >>
> >> >> original epzs:
> >> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> >> >> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200
> >> >>
> >> >> epzs uses small diamond pattern. a new pattern could also help.
> >> >>
> >> >> Please post patch like last time.
> >> >>>
> >> >>
> >> >> latest patch attached.
> >> >>
> >> >
> >> > UMH ME is still somehow buggy.
> >> >
> >> > EPZS seems good, great work!
> >>
> >
> > what epzs did that i couldn't be able to do with umh is, it fixed lot of
> > artifacts that require bigger search window. if i increase search param
> > with umh it increase the artifacts. same happen with esa.
> > i guess umh uses less predictors but a better search pattern. if we
> combine
> > both epzs and uhm, it should increase the quality further.
> >
> >
> >> Actually after second look EPZS is not much better than UMH here.
> >>
>
> 720p parkjoy sample looks fine with EPZS it seems.
>
> >
> > please give me link to the video that you tested.
>
> http://samples.ffmpeg.org/benchmark/testsuite1/matrixbench_mpeg2.mpg
>
> Too much dark scenes.
>

Impressive results, great job!

I just tried  minterpolate=fps=250:mc_mode=aobmc:me=epzs and did have some
artefacts in one of my slowmo samples but overall the quality is very, very
nice! If you're interested in more samples or in more testing, let me know.

Is the command line I used the one best for reducing artefacts or are there
options known to be better in terms of artefact reduction?


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Paul B Mahol
On 8/19/16, Davinder Singh  wrote:
> On Fri, Aug 19, 2016 at 3:27 AM Paul B Mahol  wrote:
>
>> On 8/18/16, Paul B Mahol  wrote:
>> > On 8/18/16, Davinder Singh  wrote:
>> >> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:
>> >>
>> >>> [...]
>> >>>
>> >>
>> >> i tried to modify EPZS. i removed the early termination threshold which
>> >> skip some predictors :-/
>> >> new score:
>> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
>> >> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
>> >>
>> >> original epzs:
>> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
>> >> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200
>> >>
>> >> epzs uses small diamond pattern. a new pattern could also help.
>> >>
>> >> Please post patch like last time.
>> >>>
>> >>
>> >> latest patch attached.
>> >>
>> >
>> > UMH ME is still somehow buggy.
>> >
>> > EPZS seems good, great work!
>>
>
> what epzs did that i couldn't be able to do with umh is, it fixed lot of
> artifacts that require bigger search window. if i increase search param
> with umh it increase the artifacts. same happen with esa.
> i guess umh uses less predictors but a better search pattern. if we combine
> both epzs and uhm, it should increase the quality further.
>
>
>> Actually after second look EPZS is not much better than UMH here.
>>

720p parkjoy sample looks fine with EPZS it seems.

>
> please give me link to the video that you tested.

http://samples.ffmpeg.org/benchmark/testsuite1/matrixbench_mpeg2.mpg

Too much dark scenes.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Moritz Barsnick
On Fri, Aug 19, 2016 at 11:19:22 +, Davinder Singh wrote:
> > Same here and many other places. "!=" is a valid operator. ;)
> 
> yes, but that would only apply with the == operator, not the = (assignment) operator, no?

D'uh, stupid me, I missed that. Sorry!

> will do. can you tell which is faster?

I *believe* switch/case is faster (IIRC):
- The code doesn't need to check a chain of if() cases to get to the
  later ones.
- The compiler can create a look-up table and jump through to the
  correct code block.

Moritz


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Davinder Singh
On Fri, Aug 19, 2016 at 3:27 AM Paul B Mahol  wrote:

> On 8/18/16, Paul B Mahol  wrote:
> > On 8/18/16, Davinder Singh  wrote:
> >> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:
> >>
> >>> [...]
> >>>
> >>
> >> i tried to modify EPZS. i removed the early termination threshold which
> >> skip some predictors :-/
> >> new score:
> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> >> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
> >>
> >> original epzs:
> >> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> >> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200
> >>
> >> epzs uses small diamond pattern. a new pattern could also help.
> >>
> >> Please post patch like last time.
> >>>
> >>
> >> latest patch attached.
> >>
> >
> > UMH ME is still somehow buggy.
> >
> > EPZS seems good, great work!
>

what epzs did that i couldn't do with umh is, it fixed a lot of
artifacts that require a bigger search window. if i increase the search param
with umh, it increases the artifacts. the same happens with esa.
i guess umh uses fewer predictors but a better search pattern. if we combine
both epzs and umh, it should increase the quality further.


> Actually after second look EPZS is not much better than UMH here.
>

please give me link to the video that you tested.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-19 Thread Davinder Singh
On Fri, Aug 19, 2016 at 1:50 AM Moritz Barsnick  wrote:

> On Thu, Aug 18, 2016 at 19:26:39 +, Davinder Singh wrote:
>
> > +@table @option
> > +@item algo
> > +Set the algorithm to be used. Accepts one of the following values:
> > +
> > +@table @samp
> > +@item ebma
> > +Exhaustive block matching algorithm.
> > +@end table
> > +Default value is @samp{ebma}.
> [...]
> > +{ "method", "specify motion estimation method", OFFSET(method), AV_OPT_TYPE_INT, {.i64 = ME_METHOD_ESA}, ME_METHOD_ESA, ME_METHOD_UMH, FLAGS, "method" },
> > +CONST("esa",   "exhaustive search",                  ME_METHOD_ESA,   "method"),
> > +CONST("tss",   "three step search",                  ME_METHOD_TSS,   "method"),
> > +CONST("tdls",  "two dimensional logarithmic search", ME_METHOD_TDLS,  "method"),
> > +CONST("ntss",  "new three step search",              ME_METHOD_NTSS,  "method"),
> > +CONST("fss",   "four step search",                   ME_METHOD_FSS,   "method"),
> > +CONST("ds",    "diamond search",                     ME_METHOD_DS,    "method"),
> > +CONST("hexbs", "hexagon-based search",               ME_METHOD_HEXBS, "method"),
> > +CONST("epzs",  "enhanced predictive zonal search",   ME_METHOD_EPZS,  "method"),
> > +CONST("umh",   "uneven multi-hexagon search",        ME_METHOD_UMH,   "method"),
>
> Documentation mismatches implementation. I think you forgot to adapt
> the former to your modifications.
> a) It's not "algo", it's "method".
> b) Default is "esa", not the non-existent "ebma".
> c) You should actually list all possible values.
>
> Furthermore, documentation for minterpolate is missing.
>

docs are yet to be updated.


> > +#define COST_MV(x, y)\
> > +{\
> > +cost = me_ctx->get_cost(me_ctx, x_mb, y_mb, x, y);\
> > +if (cost < cost_min) {\
> > +cost_min = cost;\
> > +mv[0] = x;\
> > +mv[1] = y;\
> > +}\
> > +}
>
> The recommendation for function macros is to wrap the definition into a
> "do { } while (0)". You already do that in other places.
>

will do.


>
> > +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
>
> Why not
>if (cost_min != me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb))
> ??
>
> > +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
> > +return cost_min;
>
> Same here and many other places. "!=" is a valid operator. ;)
>

yes, but that would only apply with the == operator, not the = (assignment)
operator, no?


> > +#if 1
> > +for (i = 0; i < 8; i++)
> > +COST_P_MV(x + dia[i][0], y + dia[i][1]);
> > +#else
>
> These checks will disappear in the final version?
>
>
yes.


>
> > +{ "fps", "specify the frame rate", OFFSET(frame_rate), AV_OPT_TYPE_RATIONAL, {.dbl = 60}, 0, INT_MAX, FLAGS },
>
> Could you handle this with an AV_OPT_TYPE_VIDEO_RATE, made specially
> for cases such as this?
>

ok, will look into it.


>
> > +{ "mb_size", "specify the macroblock size", OFFSET(mb_size), AV_OPT_TYPE_INT, {.i64 = 16}, 4, 16, FLAGS },
> > +{ "search_param", "specify search parameter", OFFSET(search_param), AV_OPT_TYPE_INT, {.i64 = 32}, 4, INT_MAX, FLAGS },
>
> You can drop the "specify the" part. Every option lets you specify
> something. ;-)
>

sure. i thought of doing that while updating docs.


>
> > +//int term = (mv_x * mv_x + mv_y * mv_y);
> > +//int term = (FFABS(mv_x - me_ctx->pred_x) + FFABS(mv_y - me_ctx->pred_y));
> > +//fprintf(stdout, "sbad: %llu, term: %d\n", sbad, term);
> > +return sbad;// + term;
>
> Needs to be fixed?


> > +avcodec_get_chroma_sub_sample(inlink->format, &mi_ctx->chroma_h_shift, &mi_ctx->chroma_v_shift); //TODO remove
>
> To do.
>
> > +if (!(mi_ctx->int_blocks = av_mallocz_array(mi_ctx->b_count, sizeof(Block))))
>
> !=
>
> > +if (mi_ctx->me_method == ME_METHOD_ESA)
> > +ff_me_search_esa(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_TSS)
> > +ff_me_search_tss(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_TDLS)
> > +ff_me_search_tdls(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_NTSS)
> > +ff_me_search_ntss(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_FSS)
> > +ff_me_search_fss(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_DS)
> > +ff_me_search_ds(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_HEXBS)
> > +ff_me_search_hexbs(me_ctx, x_mb, y_mb, mv);
> > +else if (mi_ctx->me_method == ME_METHOD_EPZS) {
>
> This calls for a switch/case. (There was another place in the code
> which I haven't quoted.) Readability wouldn't improve significantly,
> but the advantage is that the compiler can check whether you forgot to
> add code for certain values.
>

will do. can you tell which is faster?


>
> > +#if CACHE_MVS
> Will this stay in?
>

it will be removed.


>
> > +#if !CACHE_MVS
> Ditto.
>
> Sorry if I missed the fact that this patch isn't ready for production
> yet, and I'm nitpicking lots of stuff.

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Paul B Mahol
On 8/18/16, Paul B Mahol  wrote:
> On 8/18/16, Davinder Singh  wrote:
>> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:
>>
>>> [...]
>>>
>>
>> i tried to modify EPZS. i removed the early termination threshold which
>> skip some predictors :-/
>> new score:
>> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
>> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
>>
>> original epzs:
>> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
>> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200
>>
>> epzs uses small diamond pattern. a new pattern could also help.
>>
>> Please post patch like last time.
>>>
>>
>> latest patch attached.
>>
>
> UMH ME is still somehow buggy.
>
> EPZS seems good, great work!

Actually after second look EPZS is not much better than UMH here.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Paul B Mahol
On 8/18/16, Davinder Singh  wrote:
> On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:
>
>> [...]
>>
>
> i tried to modify EPZS. i removed the early termination threshold which
> skip some predictors :-/
> new score:
> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200
>
> original epzs:
> $ tiny_psnr 60_source_2.yuv 60_bbb.yuv
> stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200
>
> epzs uses small diamond pattern. a new pattern could also help.
>
> Please post patch like last time.
>>
>
> latest patch attached.
>

UMH ME is still somehow buggy.

EPZS seems good, great work!


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Moritz Barsnick
On Thu, Aug 18, 2016 at 19:26:39 +, Davinder Singh wrote:

> +@table @option
> +@item algo
> +Set the algorithm to be used. Accepts one of the following values:
> +
> +@table @samp
> +@item ebma
> +Exhaustive block matching algorithm.
> +@end table
> +Default value is @samp{ebma}.
[...]
> +{ "method", "specify motion estimation method", OFFSET(method), AV_OPT_TYPE_INT, {.i64 = ME_METHOD_ESA}, ME_METHOD_ESA, ME_METHOD_UMH, FLAGS, "method" },
> +CONST("esa",   "exhaustive search",                  ME_METHOD_ESA,   "method"),
> +CONST("tss",   "three step search",                  ME_METHOD_TSS,   "method"),
> +CONST("tdls",  "two dimensional logarithmic search", ME_METHOD_TDLS,  "method"),
> +CONST("ntss",  "new three step search",              ME_METHOD_NTSS,  "method"),
> +CONST("fss",   "four step search",                   ME_METHOD_FSS,   "method"),
> +CONST("ds",    "diamond search",                     ME_METHOD_DS,    "method"),
> +CONST("hexbs", "hexagon-based search",               ME_METHOD_HEXBS, "method"),
> +CONST("epzs",  "enhanced predictive zonal search",   ME_METHOD_EPZS,  "method"),
> +CONST("umh",   "uneven multi-hexagon search",        ME_METHOD_UMH,   "method"),

Documentation mismatches implementation. I think you forgot to adapt
the former to your modifications.
a) It's not "algo", it's "method".
b) Default is "esa", not the non-existent "ebma".
c) You should actually list all possible values.

Furthermore, documentation for minterpolate is missing.


> +#define COST_MV(x, y)\
> +{\
> +cost = me_ctx->get_cost(me_ctx, x_mb, y_mb, x, y);\
> +if (cost < cost_min) {\
> +cost_min = cost;\
> +mv[0] = x;\
> +mv[1] = y;\
> +}\
> +}

The recommendation for function macros is to wrap the definition into a
"do { } while (0)". You already do that in other places.
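A minimal sketch of the suggested wrapper, with `get_cost` as a toy stand-in for the filter's real `me_ctx->get_cost` callback: the `do { } while (0)` makes the macro expand to a single statement, so it stays safe inside an un-braced `if`/`else`.

```c
#include <assert.h>

/* Toy cost surface standing in for me_ctx->get_cost(). */
static int get_cost(int x, int y) { return x * x + y * y; }

/* Wrapping the body in do { ... } while (0) makes the macro behave
 * like one statement, so "if (...) COST_MV(a, b); else ..." is safe. */
#define COST_MV(x, y)              \
do {                               \
    int cost = get_cost(x, y);     \
    if (cost < cost_min) {         \
        cost_min = cost;           \
        mv[0] = (x);               \
        mv[1] = (y);               \
    }                              \
} while (0)

static int best_mv(int *mv)
{
    int cost_min = 1 << 30;
    /* Without the do/while wrapper, this un-braced if/else would only
     * guard the first statement of the expanded macro body. */
    if (cost_min > 0)
        COST_MV(1, 2);
    else
        COST_MV(3, 4);
    return cost_min;
}
```

The same pattern applies directly to the `COST_MV` macro quoted above.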

> +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))

Why not
   if (cost_min != me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb))
??

> +if (!(cost_min = me_ctx->get_cost(me_ctx, x_mb, y_mb, x_mb, y_mb)))
> +return cost_min;

Same here and many other places. "!=" is a valid operator. ;)

> +#if 1
> +for (i = 0; i < 8; i++)
> +COST_P_MV(x + dia[i][0], y + dia[i][1]);
> +#else

These checks will disappear in the final version?


> +{ "fps", "specify the frame rate", OFFSET(frame_rate), AV_OPT_TYPE_RATIONAL, {.dbl = 60}, 0, INT_MAX, FLAGS },

Could you handle this with an AV_OPT_TYPE_VIDEO_RATE, made specially
for cases such as this?

> +{ "mb_size", "specify the macroblock size", OFFSET(mb_size), AV_OPT_TYPE_INT, {.i64 = 16}, 4, 16, FLAGS },
> +{ "search_param", "specify search parameter", OFFSET(search_param), AV_OPT_TYPE_INT, {.i64 = 32}, 4, INT_MAX, FLAGS },

You can drop the "specify the" part. Every option lets you specify
something. ;-)

> +//int term = (mv_x * mv_x + mv_y * mv_y);
> +//int term = (FFABS(mv_x - me_ctx->pred_x) + FFABS(mv_y - me_ctx->pred_y));
> +//fprintf(stdout, "sbad: %llu, term: %d\n", sbad, term);
> +return sbad;// + term;

Needs to be fixed?

> +avcodec_get_chroma_sub_sample(inlink->format, &mi_ctx->chroma_h_shift, &mi_ctx->chroma_v_shift); //TODO remove

To do.

> +if (!(mi_ctx->int_blocks = av_mallocz_array(mi_ctx->b_count, sizeof(Block))))

!=

> +if (mi_ctx->me_method == ME_METHOD_ESA)
> +ff_me_search_esa(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_TSS)
> +ff_me_search_tss(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_TDLS)
> +ff_me_search_tdls(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_NTSS)
> +ff_me_search_ntss(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_FSS)
> +ff_me_search_fss(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_DS)
> +ff_me_search_ds(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_HEXBS)
> +ff_me_search_hexbs(me_ctx, x_mb, y_mb, mv);
> +else if (mi_ctx->me_method == ME_METHOD_EPZS) {

This calls for a switch/case. (There was another place in the code
which I haven't quoted.) Readability wouldn't improve significantly,
but the advantage is that the compiler can check whether you forgot to
add code for certain values.
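As a sketch of that suggestion (the `search_*` functions here are toy stand-ins for the `ff_me_search_*` calls, and the enum is abbreviated), the chain could become a switch, which lets the compiler build a jump table and warn on a missing case:

```c
#include <assert.h>

enum MEMethod { ME_METHOD_ESA, ME_METHOD_TSS, ME_METHOD_DS, ME_METHOD_EPZS };

/* Toy stand-ins for the real ff_me_search_* functions. */
static int search_esa(void)  { return 1; }
static int search_tss(void)  { return 2; }
static int search_ds(void)   { return 3; }
static int search_epzs(void) { return 4; }

/* With -Wswitch, the compiler flags any enum value without a case,
 * which an if/else chain cannot do. */
static int dispatch(enum MEMethod m)
{
    switch (m) {
    case ME_METHOD_ESA:  return search_esa();
    case ME_METHOD_TSS:  return search_tss();
    case ME_METHOD_DS:   return search_ds();
    case ME_METHOD_EPZS: return search_epzs();
    }
    return -1;
}
```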

> +#if CACHE_MVS
Will this stay in?

> +#if !CACHE_MVS
Ditto.

Sorry if I missed the fact that this patch isn't ready for production
yet, and I'm nitpicking lots of stuff.

Moritz


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Davinder Singh
On Thu, Aug 18, 2016 at 11:52 PM Paul B Mahol  wrote:

> [...]
>

i tried to modify EPZS. i removed the early termination threshold which
skips some predictors :-/
new score:
$ tiny_psnr 60_source_2.yuv 60_bbb.yuv
stddev: 1.02 PSNR: 47.94 MAXDIFF: 186 bytes:476928000/474163200

original epzs:
$ tiny_psnr 60_source_2.yuv 60_bbb.yuv
stddev: 1.07 PSNR: 47.51 MAXDIFF: 186 bytes:476928000/474163200

epzs uses small diamond pattern. a new pattern could also help.
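For reference, the small-diamond refinement step mentioned here can be sketched as below; `get_cost` is a toy convex cost surface, not the filter's real callback, and the loop simply walks to the cheapest neighbour until the centre is a local minimum.

```c
#include <assert.h>

/* The four points of the small diamond pattern. */
static const int dia[4][2] = { {0, -1}, {-1, 0}, {1, 0}, {0, 1} };

/* Toy cost surface with its minimum at (3, 2). */
static int get_cost(int x, int y) { return (x - 3) * (x - 3) + (y - 2) * (y - 2); }

/* Move to the cheapest small-diamond neighbour until no neighbour
 * improves on the centre -- the basic refinement loop. */
static void refine(int *mv)
{
    int improved = 1;
    int cost_min = get_cost(mv[0], mv[1]);
    while (improved) {
        improved = 0;
        for (int i = 0; i < 4; i++) {
            int x = mv[0] + dia[i][0], y = mv[1] + dia[i][1];
            int cost = get_cost(x, y);
            if (cost < cost_min) {
                cost_min = cost;
                mv[0] = x;
                mv[1] = y;
                improved = 1;
            }
        }
    }
}
```

Swapping `dia` for a larger or differently shaped pattern is exactly the kind of experiment suggested above.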

Please post patch like last time.
>

latest patch attached.


0001-motion-estimation-and-interpolation-filters-v2T.patch
Description: Binary data


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Paul B Mahol
On 8/18/16, Davinder Singh  wrote:
> On Tue, Aug 16, 2016 at 1:47 AM Paul B Mahol  wrote:
>
>> [...]
>
>
> hi,
>
> made EPZS work correctly:
> https://github.com/dsmudhar/FFmpeg/commit/0fc7a5490252a7f9832775b2773b35a42025553b
> also reduced no of repeated predictors which increased the speed also.
>
>
>> What about artifacts with UMH?
>> See for example this sample:
>> https://media.xiph.org/video/derf/y4m/in_to_tree_420_720p50.y4m
>
>
> EPZS fixed artifacts in this video, and also in other videos where motion
> is fast, I can use p = 32 and quality was improved without introducing more
> artifacts as in UMH.

Please post patch like last time.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-18 Thread Davinder Singh
On Tue, Aug 16, 2016 at 1:47 AM Paul B Mahol  wrote:

> [...]


hi,

made EPZS work correctly:
https://github.com/dsmudhar/FFmpeg/commit/0fc7a5490252a7f9832775b2773b35a42025553b
also reduced the number of repeated predictors, which also increased the speed.


> What about artifacts with UMH?
> See for example this sample:
> https://media.xiph.org/video/derf/y4m/in_to_tree_420_720p50.y4m


EPZS fixed the artifacts in this video, and also in other videos where motion
is fast. I can use p = 32, and quality is improved without introducing more
artifacts as with UMH.

thanks :)


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-16 Thread Davinder Singh
On Tue, Aug 16, 2016 at 5:46 PM Michael Niedermayer 
wrote:

> [...]
>
> not sure i suggested it previously already but you can add yourself
> to the MAINTAINERs file if you want to maintain / continue working on
> the code after GSoC
>

i surely will.

thanks


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-16 Thread Michael Niedermayer
On Sat, Aug 13, 2016 at 12:18:56PM +, Davinder Singh wrote:
> On Thu, Aug 11, 2016 at 12:10 AM Davinder Singh  wrote:
> 
> > [...]
> >
> > latest changes:
> https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/vf_minterpolate.c
> uses shared motion estimation code now, added options, improved vsbmc.
> i tried to make the filter options as flexible as possible so that multiple
> algorithms can be supported.
> 
> @Ronald:
> have a look:
> https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/motion_estimation.c
> i think if penalty factor can be moved into cost function, motion
> estimation can be shared with encoders. we can start work on this after
> GSoC?
> 
> TODO:
> frame border motion estimation.
> add scene change threshold. the roughness check doesn't work so well and
> introduces artifacts.
> add docs.
> 
> 
> > here's another idea: dynamic block size selection for MC-FRUC.
> > since it's not video encoding, using a 16x16 block with a fixed search window
> > may not work the same for all resolutions. what if we automatically resize the
> > block depending on resolution? e.g. if 16x16, P=20 works fine for a 1280x720
> > video, we can scale it according to width: for 1920x1080, which is 1.5x
> > 1280, we use a 24x24 block and also scale P accordingly? i haven't tested it
> > yet though.
> >
> 
> i tested this. quality was improved with 1080p but not with smaller
> resolution.
> 
> I tried to scale best settings of 720p. UMH. 1080p same video.
> scale nothing: mb16 p18
> stddev:1.16 PSNR: 46.80 MAXDIFF:  197 bytes:1085529600/1073088000
> scale search window: mb16, p27
> stddev:1.21 PSNR: 46.47 MAXDIFF:  193 bytes:1085529600/1073088000
> scale both: mb24 p18
> stddev:1.14 PSNR: 46.93 MAXDIFF:  181 bytes:1085529600/1073088000
> 
> ESA
> mb16 p16:
> stddev:1.18 PSNR: 46.65 MAXDIFF:  181 bytes:1085529600/1073088000
> mb24 p24:
> stddev:1.16 PSNR: 46.77 MAXDIFF:  181 bytes:1085529600/1073088000
> 
> 640p ESA
> m16 p16:
> stddev:1.01 PSNR: 47.97 MAXDIFF:  160 bytes:119577600/118540800
> scale p: mb16 p8:
> stddev:1.02 PSNR: 47.95 MAXDIFF:  148 bytes:119577600/118540800
> scale both: m8 p8:
> stddev:1.05 PSNR: 47.63 MAXDIFF:  187 bytes:119577600/118540800
> 
> i think quality can be further improved; the generated test window weights were
> not perfect.
> should i keep this feature? since the block size won't be a power-of-two int,
> that will break vsbmc, which uses quadtree division for smaller blocks.
> 
> 
> > [1]: JVT-F017.pdf by Z Chen 
> >
> 
> 
> On Thu, Aug 11, 2016 at 9:09 PM Paul B Mahol  wrote:
> 
> > Could you please squash your commits and attach patches that add
> > vf_mestimate
> > and vf_minterpolate filters?
> >
> 
> patch attached.

>  doc/filters.texi|   25 
>  libavfilter/Makefile|2 
>  libavfilter/allfilters.c|2 
>  libavfilter/motion_estimation.c |  451 +
>  libavfilter/motion_estimation.h |   76 ++
>  libavfilter/vf_mestimate.c  |  376 +++
>  libavfilter/vf_minterpolate.c   | 1332 
> 

not sure if i already suggested it previously, but you can add yourself
to the MAINTAINERS file if you want to maintain / continue working on
the code after GSoC

Thanks

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Opposition brings concord. Out of discord comes the fairest harmony.
-- Heraclitus




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Davinder Singh
On Tue, Aug 16, 2016, 1:47 AM Paul B Mahol  wrote:

> On 8/15/16, Davinder Singh  wrote:
> > On Tue, Aug 16, 2016 at 1:40 AM Davinder Singh 
> wrote:
> >
> >> On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:
> >>
> >>> [...]
> >>
> >>
> >>> Also, why is there no code for scene change detection?
> >>> If scene changes abruptly it will give bad frame.
> >>>
> >>
> >> added scene change detection from framerate filter:
> >>
> >>
> https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78
> >>
> >> On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer
> >>  wrote:
> >>
> >>> [...]
> >>
> >>
> >>> the motion estimation should already produce a "matching score" of some
> >>> kind for every block, its sum is probably a good indication how
> >>> similar frames are
> >>> the sum probably would need to be compared to some measure of variance
> >>> for the frame so near black frames dont get better matches
> >>> a bit like a correlation coefficient
> >>> you can also look at
> >>> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
> >>>
> >>
> >> i also tested comparing sum of SBAD score but it gave me mostly false
> >> detection.
> >> vf_framerate one works even with dark scenes (i reduced threshold from 7
> >> to 5) correctly, though it doesn't consider any motion.
> >>
> >
> > i currently duplicate the frames for one loop of interpolations (until
> next
> > frame arrives), blending can also be done.
> >
> https://github.com/dsmudhar/FFmpeg/blob/1ad01c530569dfa1f085a31b6435597a97001a78/libavfilter/vf_minterpolate.c#L1101
> > which one you think would be better? frame dup seems perfect to me
>
> What about artifacts with UMH?
> See for example this sample:
> https://media.xiph.org/video/derf/y4m/in_to_tree_420_720p50.y4m


Trying to improve the quality of frames. The "smoothness" term, suggested
by Michael, should reduce the artifacts.

>
>


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Paul B Mahol
On 8/15/16, Davinder Singh  wrote:
> On Tue, Aug 16, 2016 at 1:40 AM Davinder Singh  wrote:
>
>> On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:
>>
>>> [...]
>>
>>
>>> Also, why is there no code for scene change detection?
>>> If scene changes abruptly it will give bad frame.
>>>
>>
>> added scene change detection from framerate filter:
>>
>> https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78
>>
>> On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer
>>  wrote:
>>
>>> [...]
>>
>>
>>> the motion estimation should already produce a "matching score" of some
>>> kind for every block, its sum is probably a good indication how
>>> similar frames are
>>> the sum probably would need to be compared to some measure of variance
>>> for the frame so near black frames dont get better matches
>>> a bit like a correlation coefficient
>>> you can also look at
>>> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
>>>
>>
>> i also tested comparing sum of SBAD score but it gave me mostly false
>> detection.
>> vf_framerate one works even with dark scenes (i reduced threshold from 7
>> to 5) correctly, though it doesn't consider any motion.
>>
>
> i currently duplicate the frames for one loop of interpolations (until next
> frame arrives), blending can also be done.
> https://github.com/dsmudhar/FFmpeg/blob/1ad01c530569dfa1f085a31b6435597a97001a78/libavfilter/vf_minterpolate.c#L1101
> which one you think would be better? frame dup seems perfect to me

What about artifacts with UMH?
See for example this sample:
https://media.xiph.org/video/derf/y4m/in_to_tree_420_720p50.y4m


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Davinder Singh
On Tue, Aug 16, 2016 at 1:40 AM Davinder Singh  wrote:

> On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:
>
>> [...]
>
>
>> Also, why is there no code for scene change detection?
>> If scene changes abruptly it will give bad frame.
>>
>
> added scene change detection from framerate filter:
>
> https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78
>
> On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer
>  wrote:
>
>> [...]
>
>
>> the motion estimation should already produce a "matching score" of some
>> kind for every block, its sum is probably a good indication how
>> similar frames are
>> the sum probably would need to be compared to some measure of variance
>> for the frame so near black frames dont get better matches
>> a bit like a correlation coefficient
>> you can also look at
>> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
>>
>
> i also tested comparing sum of SBAD score but it gave me mostly false
> detection.
> vf_framerate one works even with dark scenes (i reduced threshold from 7
> to 5) correctly, though it doesn't consider any motion.
>

i currently duplicate the frames for one loop of interpolations (until the
next frame arrives); blending can also be done.
https://github.com/dsmudhar/FFmpeg/blob/1ad01c530569dfa1f085a31b6435597a97001a78/libavfilter/vf_minterpolate.c#L1101
which one do you think would be better? frame duplication seems perfect to me
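For reference, the blending alternative could look roughly like this (a hedged sketch, not the actual vf_minterpolate code; `blend_pixel()`/`blend_plane()` are hypothetical helper names):

```c
#include <stdint.h>

/* Average two co-located pixels with rounding; this is the per-pixel core
 * of a simple blend fallback between the previous and next frame. */
static uint8_t blend_pixel(uint8_t a, uint8_t b)
{
    return (a + b + 1) >> 1;
}

/* Blend one plane: instead of duplicating the previous frame while
 * waiting for the next one, output the rounded average of both. */
static void blend_plane(uint8_t *dst, const uint8_t *prev,
                        const uint8_t *next, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = blend_pixel(prev[i], next[i]);
}
```

Frame duplication is the degenerate case where `dst` just copies `prev`; blending trades the sudden jump of a dup for slight ghosting on moving content.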


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-15 Thread Davinder Singh
On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:

> [...]
> Also, why is there no code for scene change detection?
> If scene changes abruptly it will give bad frame.
>

added scene change detection from framerate filter:
https://github.com/dsmudhar/FFmpeg/commit/1ad01c530569dfa1f085a31b6435597a97001a78

On Sat, Aug 13, 2016 at 10:41 PM Michael Niedermayer 
wrote:

> [...]
> the motion estimation should already produce a "matching score" of some
> kind for every block, its sum is probably a good indication how
> similar frames are
> the sum probably would need to be compared to some meassure of variance
> for the frame so near black frames dont get better matches
> a bit like a correlation coefficient
> you can also look at
> git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
>

i also tested comparing the sum of SBAD scores, but it gave me mostly false
detections.
the vf_framerate one works correctly even with dark scenes (i reduced the
threshold from 7 to 5), though it doesn't consider any motion.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Michael Niedermayer
On Sat, Aug 13, 2016 at 03:51:17PM +, Davinder Singh wrote:
> On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:
> 
> > On 8/13/16, Paul B Mahol  wrote:
> > > On 8/13/16, Davinder Singh  wrote:
> > >>
> > >> patch attached.
> > >>
> > >
> > > Please add EPZS to minterpolate.
> > >
> >
> > Also, why is there no code for scene change detection?
> 
> If scene changes abruptly it will give bad frame.
> >
> 

> none of paper had scene change detection. any idea how can i add it?

the motion estimation should already produce a "matching score" of some
kind for every block; its sum is probably a good indication of how
similar the frames are.
the sum would probably need to be compared to some measure of variance
for the frame, so that near-black frames don't get better matches,
a bit like a correlation coefficient.
you can also look at:
git grep scene libavcodec/mpegvideo* libavcodec/motion_es*
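The suggestion above could be sketched roughly like this (illustrative only; `scene_change_score()` and the exact normalization are assumptions, not existing FFmpeg code):

```c
#include <stdint.h>

/* Sum the per-block matching costs (e.g. SAD) produced by motion
 * estimation and normalize by the frame's variance, so that near-black
 * frames, whose SAD is tiny regardless of content, do not look like
 * perfect matches. A large score suggests a scene change; the threshold
 * would have to be tuned experimentally. */
static double scene_change_score(const uint64_t *block_sad, int nb_blocks,
                                 double frame_variance)
{
    uint64_t total = 0;
    for (int i = 0; i < nb_blocks; i++)
        total += block_sad[i];
    if (frame_variance < 1e-6)      /* avoid dividing by ~zero on flat frames */
        return 0.0;
    return (double)total / (nb_blocks * frame_variance);
}
```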

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Why not whip the teacher when the pupil misbehaves? -- Diogenes of Sinope


signature.asc
Description: Digital signature


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Davinder Singh
On Sat, Aug 13, 2016 at 8:05 PM Paul B Mahol  wrote:

> On 8/13/16, Paul B Mahol  wrote:
> > On 8/13/16, Davinder Singh  wrote:
> >>
> >> patch attached.
> >>
> >
> > Please add EPZS to minterpolate.
> >
>
> Also, why is there no code for scene change detection?

If scene changes abruptly it will give bad frame.
>

none of the papers had scene change detection. any idea how i can add it?


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Davinder Singh
On Sat, Aug 13, 2016 at 7:28 PM Paul B Mahol  wrote:

> On 8/13/16, Davinder Singh  wrote:
> >
> > patch attached.
> >
>
> Please add EPZS to minterpolate.
>

added.
https://github.com/dsmudhar/FFmpeg/commit/1ad40c3f405625075b93dde71a749593dc64f0e3




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Paul B Mahol
On 8/13/16, Paul B Mahol  wrote:
> On 8/13/16, Davinder Singh  wrote:
>>
>> patch attached.
>>
>
> Please add EPZS to minterpolate.
>

Also, why is there no code for scene change detection?
If scene changes abruptly it will give bad frame.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-13 Thread Paul B Mahol
On 8/13/16, Davinder Singh  wrote:
>
> patch attached.
>

Please add EPZS to minterpolate.


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-11 Thread Davinder Singh
On Thu, Aug 11, 2016 at 9:09 PM Paul B Mahol  wrote:

> On 8/10/16, Davinder Singh  wrote:
> > On Mon, Jul 25, 2016 at 9:35 AM Davinder Singh 
> wrote:
> >
> >> https://github.com/dsmudhar/FFmpeg/commits/dev
> >>
> >> The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
> >> (which uses bilateral motion estimation) v/s motion vectors exported by
> >> mEstimate filter:
> >>
> >> $ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
> >> stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200
> >>
> >> $ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
> >> stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200
> >>
> >> Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5
> >>
> >> Compared to simple OBMC which over-smooth edges, Objects clustering and
> >> Adaptive OBMC makes the edges crisp but also introduce blocking
> artifacts
> >> where MVs are bad (with default search window = 7). But I think it’s
> ESA’s
> >> fault. The paper doesn’t specify which motion estimation method they
> used;
> >> I have been using ESA. I think quality can be further improved with
> EPZS,
> >> which I'm going to implement.
> >>
> >> I also tried to tweak VS-BMC (Variable size block motion compensation)
> >> which reduced the blocking artifacts in VS-BMC area. Had to do
> experiments
> >> a lot, more to be done.
> >>
> >> mEstimate filter (ESA) + Simple OBMC:
> >> http://www.mediafire.com/?3b8j1zj1lsuw979
> >> Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu
> >>
> >>
> >> Regards,
> >> DSM_
> >>
> >
> >
> >
> > implemented all other modern fast ME algorithms:
> > https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/vf_mestimate.c
>
> Could you please squash your commits and attach patches that add
> vf_mestimate
> and vf_minterpolate filters?
>

will send very soon.




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-11 Thread Paul B Mahol
On 8/10/16, Davinder Singh  wrote:
> On Mon, Jul 25, 2016 at 9:35 AM Davinder Singh  wrote:
>
>> https://github.com/dsmudhar/FFmpeg/commits/dev
>>
>> The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
>> (which uses bilateral motion estimation) v/s motion vectors exported by
>> mEstimate filter:
>>
>> $ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
>> stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200
>>
>> $ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
>> stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200
>>
>> Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5
>>
>> Compared to simple OBMC which over-smooth edges, Objects clustering and
>> Adaptive OBMC makes the edges crisp but also introduce blocking artifacts
>> where MVs are bad (with default search window = 7). But I think it’s ESA’s
>> fault. The paper doesn’t specify which motion estimation method they used;
>> I have been using ESA. I think quality can be further improved with EPZS,
>> which I'm going to implement.
>>
>> I also tried to tweak VS-BMC (Variable size block motion compensation)
>> which reduced the blocking artifacts in VS-BMC area. Had to do experiments
>> a lot, more to be done.
>>
>> mEstimate filter (ESA) + Simple OBMC:
>> http://www.mediafire.com/?3b8j1zj1lsuw979
>> Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu
>>
>>
>> Regards,
>> DSM_
>>
>
>
>
> implemented all other modern fast ME algorithms:
> https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/vf_mestimate.c

Could you please squash your commits and attach patches that add vf_mestimate
and vf_minterpolate filters?


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-08-10 Thread Davinder Singh
On Mon, Jul 25, 2016 at 9:35 AM Davinder Singh  wrote:

> https://github.com/dsmudhar/FFmpeg/commits/dev
>
> The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
> (which uses bilateral motion estimation) v/s motion vectors exported by
> mEstimate filter:
>
> $ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
> stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200
>
> $ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
> stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200
>
> Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5
>
> Compared to simple OBMC which over-smooth edges, Objects clustering and
> Adaptive OBMC makes the edges crisp but also introduce blocking artifacts
> where MVs are bad (with default search window = 7). But I think it’s ESA’s
> fault. The paper doesn’t specify which motion estimation method they used;
> I have been using ESA. I think quality can be further improved with EPZS,
> which I'm going to implement.
>
> I also tried to tweak VS-BMC (Variable size block motion compensation)
> which reduced the blocking artifacts in VS-BMC area. Had to do experiments
> a lot, more to be done.
>
> mEstimate filter (ESA) + Simple OBMC:
> http://www.mediafire.com/?3b8j1zj1lsuw979
> Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu
>
>
> Regards,
> DSM_
>



implemented all the other modern fast ME algorithms:
https://github.com/dsmudhar/FFmpeg/blob/dev/libavfilter/vf_mestimate.c

quality is further improved with UMH, which uses prediction [1]:
$ ../../../tiny_psnr 60_source_2.yuv 60_wtf.yuv
stddev: 1.05 PSNR: 47.65 MAXDIFF: 178 bytes:476928000/474163200
(search window = 18)

the only problem is when the motion in some movie scenes (e.g. far
objects in the background when the camera is rotating) is too fast and
larger than the search window; then there will be artifacts.

the good thing with predictive UMH search (compared to ESA) is that we
can use a bigger search window; with P around 20, it removed all those
artifacts for which the search window wasn't large enough.

but using a search window that is too big reduces the quality.


here's another idea: dynamic block size selection for MC-FRUC.
since this isn't video encoding, using a 16x16 block with a fixed search
window may not work the same for all video resolutions. what if we
automatically resize the block depending on the resolution? e.g. if
16x16 with P=20 works fine for 1280x720 video, we can scale it according
to width: for 1920x1080, which is 1.5x 1280, we use a 24x24 block and
scale P accordingly. i haven't tested it yet, though.
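A minimal sketch of that resolution-based scaling idea (untested, as the mail says; the function names and the rounding policy are assumptions, not filter options):

```c
/* Scale block size and search range P linearly with frame width, using
 * 16x16 / P=20 at 1280-wide video as the reference point. Rounding the
 * block size to a multiple of 4 keeps it SIMD-friendly. */
static int scaled_block_size(int width)
{
    int bs = 16 * width / 1280;     /* linear scale from the 1280 baseline */
    bs = (bs + 2) & ~3;             /* round to the nearest multiple of 4 */
    return bs < 8 ? 8 : bs;         /* never go below 8x8 */
}

static int scaled_search_range(int width)
{
    int p = 20 * width / 1280;      /* scale P with the same ratio */
    return p < 4 ? 4 : p;
}
```

With these constants, a 1920-wide frame gets 24x24 blocks, matching the 1.5x example in the mail.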

[1]: JVT-F017.pdf by Z Chen 


DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-27 Thread Davinder Singh
On Wed, Jul 27, 2016 at 4:50 PM Michael Niedermayer 
wrote:

> On Tue, Jul 26, 2016 at 07:30:14PM +, Davinder Singh wrote:
> > hi
> >
> > On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje 
> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> > >  > > > wrote:
> > >
> > > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> > >
> > >
> > > So, correct me if I'm wrong, but it seems the complete ME code
> currently
> > > lives inside the filter. I wonder if that is the best way forward. I
> > > thought the idea was to split out the ME code into its own module and
> share
> > > it between various filters and the relevant encoders without a strict
> > > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > > anything like that?
> > >
> > > Ronald
> >
> >
> > The code is almost ready to be shared, I just didn't move that yet. That
> > makes changes difficult. mInterpolate will use those functions (which are
> > currently in mEstimate) to find true motion. My plan is to move that code
> > out of mEstimate to say, libavfilter/motion_estimation.c and can be
> shared
> > between multiple filters. Since that is general ME, I think it can be
> used
> > with encoding (with some changes). So, should I move it to libavutil
> > instead?
>
> one thing thats important,
> independant of where its moved, the interface between libs is part
> of the public ABI of that lib and thus cannot be changed once it is
> added. That is new functions can be added but they
> cannot be removed nor their interface changed once added until the
> next major version bump (which might occur once a year)
>
> its important to keep this in mind when designing inter lib interfaces


 I'll keep this in mind.



On Wed, Jul 27, 2016 at 6:12 PM Ronald S. Bultje  wrote:

> Hi,
>
> [...]
>
>
> You could - to address this - design it as if it lived in libavutil, but
> (until it actually is used in libavcodec) keep it in libavfilter with a ff_
> function prefix to ensure functions are not exported from the lib.
>
> Once libavcodec uses it, move it to libavutil and change ff_ to av(priv?)_
> prefix so it's exported.


I was thinking something like that! Will do.

Thanks!


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-27 Thread Michael Niedermayer
On Wed, Jul 27, 2016 at 08:41:28AM -0400, Ronald S. Bultje wrote:
> Hi,
> 
> On Wed, Jul 27, 2016 at 7:20 AM, Michael Niedermayer  > wrote:
> 
> > On Tue, Jul 26, 2016 at 07:30:14PM +, Davinder Singh wrote:
> > > hi
> > >
> > > On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> > > >  > > > > wrote:
> > > >
> > > > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> > > >
> > > >
> > > > So, correct me if I'm wrong, but it seems the complete ME code
> > currently
> > > > lives inside the filter. I wonder if that is the best way forward. I
> > > > thought the idea was to split out the ME code into its own module and
> > share
> > > > it between various filters and the relevant encoders without a strict
> > > > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > > > anything like that?
> > > >
> > > > Ronald
> > >
> > >
> > > The code is almost ready to be shared, I just didn't move that yet. That
> > > makes changes difficult. mInterpolate will use those functions (which are
> > > currently in mEstimate) to find true motion. My plan is to move that code
> > > out of mEstimate to say, libavfilter/motion_estimation.c and can be
> > shared
> > > between multiple filters. Since that is general ME, I think it can be
> > used
> > > with encoding (with some changes). So, should I move it to libavutil
> > > instead?
> >
> > one thing thats important,
> > independant of where its moved, the interface between libs is part
> > of the public ABI of that lib and thus cannot be changed once it is
> > added. That is new functions can be added but they
> > cannot be removed nor their interface changed once added until the
> > next major version bump (which might occur once a year)
> >
> > its important to keep this in mind when designing inter lib interfaces
> 
> 
> You could - to address this - design it as if it lived in libavutil, but
> (until it actually is used in libavcodec) keep it in libavfilter with a ff_
> function prefix to ensure functions are not exported from the lib.
> 
> Once libavcodec uses it, move it to libavutil and change ff_ to av(priv?)_
> prefix so it's exported.

i like this idea a lot!

thanks!

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Asymptotically faster algorithms should always be preferred if you have
asymptotical amounts of data




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-27 Thread Ronald S. Bultje
Hi,

On Wed, Jul 27, 2016 at 7:20 AM, Michael Niedermayer  wrote:

> On Tue, Jul 26, 2016 at 07:30:14PM +, Davinder Singh wrote:
> > hi
> >
> > On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje 
> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> > >  > > > wrote:
> > >
> > > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> > >
> > >
> > > So, correct me if I'm wrong, but it seems the complete ME code
> currently
> > > lives inside the filter. I wonder if that is the best way forward. I
> > > thought the idea was to split out the ME code into its own module and
> share
> > > it between various filters and the relevant encoders without a strict
> > > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > > anything like that?
> > >
> > > Ronald
> >
> >
> > The code is almost ready to be shared, I just didn't move that yet. That
> > makes changes difficult. mInterpolate will use those functions (which are
> > currently in mEstimate) to find true motion. My plan is to move that code
> > out of mEstimate to say, libavfilter/motion_estimation.c and can be
> shared
> > between multiple filters. Since that is general ME, I think it can be
> used
> > with encoding (with some changes). So, should I move it to libavutil
> > instead?
>
> one thing thats important,
> independant of where its moved, the interface between libs is part
> of the public ABI of that lib and thus cannot be changed once it is
> added. That is new functions can be added but they
> cannot be removed nor their interface changed once added until the
> next major version bump (which might occur once a year)
>
> its important to keep this in mind when designing inter lib interfaces


You could - to address this - design it as if it lived in libavutil, but
(until it actually is used in libavcodec) keep it in libavfilter with a ff_
function prefix to ensure functions are not exported from the lib.

Once libavcodec uses it, move it to libavutil and change ff_ to av(priv?)_
prefix so it's exported.

Ronald


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-27 Thread Michael Niedermayer
On Tue, Jul 26, 2016 at 07:30:14PM +, Davinder Singh wrote:
> hi
> 
> On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje  wrote:
> 
> > Hi,
> >
> > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> >  > > wrote:
> >
> > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> >
> >
> > So, correct me if I'm wrong, but it seems the complete ME code currently
> > lives inside the filter. I wonder if that is the best way forward. I
> > thought the idea was to split out the ME code into its own module and share
> > it between various filters and the relevant encoders without a strict
> > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > anything like that?
> >
> > Ronald
> 
> 
> The code is almost ready to be shared, I just didn't move that yet. That
> makes changes difficult. mInterpolate will use those functions (which are
> currently in mEstimate) to find true motion. My plan is to move that code
> out of mEstimate to say, libavfilter/motion_estimation.c and can be shared
> between multiple filters. Since that is general ME, I think it can be used
> with encoding (with some changes). So, should I move it to libavutil
> instead?

one thing that's important:
independent of where it's moved, the interface between libs is part
of the public ABI of that lib and thus cannot be changed once it is
added. that is, new functions can be added, but they cannot be removed,
nor can their interface be changed, until the next major version bump
(which might occur once a year)

it's important to keep this in mind when designing inter-lib interfaces

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Awnsering whenever a program halts or runs forever is
On a turing machine, in general impossible (turings halting problem).
On any real computer, always possible as a real computer has a finite number
of states N, and will either halt in less than N cycles or never halt.




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-26 Thread Davinder Singh
On Wed, Jul 27, 2016 at 1:06 AM Ronald S. Bultje  wrote:

> Hi,
>
> On Tue, Jul 26, 2016 at 3:30 PM, Davinder Singh 
> wrote:
>
> > hi
> >
> > On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje 
> > wrote:
> >
> > > Hi,
> > >
> > > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> > >  > > > wrote:
> > >
> > > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> > >
> > >
> > > So, correct me if I'm wrong, but it seems the complete ME code
> currently
> > > lives inside the filter. I wonder if that is the best way forward. I
> > > thought the idea was to split out the ME code into its own module and
> > share
> > > it between various filters and the relevant encoders without a strict
> > > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > > anything like that?
> > >
> > > Ronald
> >
> >
> > The code is almost ready to be shared, I just didn't move that yet. That
> > makes changes difficult. mInterpolate will use those functions (which are
> > currently in mEstimate) to find true motion. My plan is to move that code
> > out of mEstimate to say, libavfilter/motion_estimation.c and can be
> shared
> > between multiple filters. Since that is general ME, I think it can be
> used
> > with encoding (with some changes). So, should I move it to libavutil
> > instead?
>
>
> I have no strong opinion on where it lives, I'd say libavcodec since we
> already have some lavfilters depending on lavcodec, but if you prefer
> lavutil that's fine also. As long as the code itself is shared in the final
> product, it's good with me.
>

Alright, I'll go with libavutil if that's okay with everyone.

Thanks!


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-26 Thread Ronald S. Bultje
Hi,

On Tue, Jul 26, 2016 at 3:30 PM, Davinder Singh  wrote:

> hi
>
> On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje 
> wrote:
>
> > Hi,
> >
> > On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
> >  > > wrote:
> >
> > > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > > https://github.com/dsmudhar/FFmpeg/commits/dev
> >
> >
> > So, correct me if I'm wrong, but it seems the complete ME code currently
> > lives inside the filter. I wonder if that is the best way forward. I
> > thought the idea was to split out the ME code into its own module and
> share
> > it between various filters and the relevant encoders without a strict
> > dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> > anything like that?
> >
> > Ronald
>
>
> The code is almost ready to be shared, I just didn't move that yet. That
> makes changes difficult. mInterpolate will use those functions (which are
> currently in mEstimate) to find true motion. My plan is to move that code
> out of mEstimate to say, libavfilter/motion_estimation.c and can be shared
> between multiple filters. Since that is general ME, I think it can be used
> with encoding (with some changes). So, should I move it to libavutil
> instead?


I have no strong opinion on where it lives, I'd say libavcodec since we
already have some lavfilters depending on lavcodec, but if you prefer
lavutil that's fine also. As long as the code itself is shared in the final
product, it's good with me.

Thanks!
Ronald


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-26 Thread Davinder Singh
hi

On Mon, Jul 25, 2016 at 9:55 PM Ronald S. Bultje  wrote:

> Hi,
>
> On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer
>  > wrote:
>
> > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > https://github.com/dsmudhar/FFmpeg/commits/dev
>
>
> So, correct me if I'm wrong, but it seems the complete ME code currently
> lives inside the filter. I wonder if that is the best way forward. I
> thought the idea was to split out the ME code into its own module and share
> it between various filters and the relevant encoders without a strict
> dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> anything like that?
>
> Ronald


The code is almost ready to be shared; I just haven't moved it yet, since
that would make changes difficult. mInterpolate will use those functions
(which are currently in mEstimate) to find true motion. My plan is to move
that code out of mEstimate to, say, libavfilter/motion_estimation.c, so it
can be shared between multiple filters. Since that is general ME, I think
it can be used for encoding (with some changes). So, should I move it to
libavutil instead?


DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-25 Thread Michael Niedermayer
On Mon, Jul 25, 2016 at 12:25:07PM -0400, Ronald S. Bultje wrote:
> Hi,
> 
> On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer  > wrote:
> 
> > On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > > https://github.com/dsmudhar/FFmpeg/commits/dev
> 
> 
> So, correct me if I'm wrong, but it seems the complete ME code currently
> lives inside the filter. I wonder if that is the best way forward. I
> thought the idea was to split out the ME code into its own module and share
> it between various filters and the relevant encoders without a strict
> dependency on avfilter/avcodec, or more specifically, AVCodecContext or
> anything like that?

there is little overlap between ME used in encoders and ME used
for finding true motion.
these are different problems, unless one considers just some
initial motion estimation pass that serves as a starting point
(that initial pass code could be shared in principle ...)

For encoding you want to minimize the distortion after the decoder
and the bitrate, and you are restricted by the structure of the
codec, like 4x4, 8x8 and 16x16 blocks with 1/4-pel translational
motion, for example


For finding true motion for the purpose of filtering there is no
bitrate, and there are no restrictions on the segmentation, precision,
type of motion or the number of motion vectors per pixel. distortion
does not even matter all that much; the vectors must match the
true motion of the objects, and if you can't achieve that, making the
vector field smooth temporally and spatially is much more important,
because having all vectors wrong in a smooth way looks much better
than having a tiny number of vectors wrong, even if those match the
pixels very well.
simple frame duplication is basically all (0,0) vectors from the
past frame, and that looks ok


just think about what happens if you interpolate with perfectly
optimal vectors for every pixel but one single 16x16 block being
entirely wrong. such a frame would have a randomly displaced 16x16
block in each frame, which is much worse than all (0,0) vectors.
also, that bad block could be a 100% pixel-wise match between
frame 1 and frame 3 and still be totally wrong at the interpolated
position in frame 2 (which we create).
this is the big issue i had with mcfps: it all looks good except for
the occasional vectors which match ok pixel-wise but are just really
not right vector-wise, like someone's finger moving to his nose
because that matches well pixel-wise ...
(it wasn't a finger and a nose, but random blocks moving around very
 differently from the actual motion, which looks really bad)

Also, another issue with code sharing ATM is that it would make
tuning the code harder; the ME code used for the encoders is quite
restricted in what it supports, and it's optimized for that.
IIUC, for the filtering Davinder is still testing and tuning things;
i think that's much easier without dragging
"shared with encoders and highly optimized for that" code along
[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I have never wished to cater to the crowd; for what I know they do not
approve, and what they approve I do not know. -- Epicurus




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-25 Thread Ronald S. Bultje
Hi,

On Mon, Jul 25, 2016 at 5:39 AM, Michael Niedermayer  wrote:

> On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> > https://github.com/dsmudhar/FFmpeg/commits/dev


So, correct me if I'm wrong, but it seems the complete ME code currently
lives inside the filter. I wonder if that is the best way forward. I
thought the idea was to split out the ME code into its own module and share
it between various filters and the relevant encoders without a strict
dependency on avfilter/avcodec, or more specifically, AVCodecContext or
anything like that?

Ronald


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-25 Thread Michael Niedermayer
On Mon, Jul 25, 2016 at 04:05:54AM +, Davinder Singh wrote:
> https://github.com/dsmudhar/FFmpeg/commits/dev
> 
> The Paper 2 algorithm is complete. It seems good. If I compare Paper 2
> (which uses bilateral motion estimation) v/s motion vectors exported by

good!


> mEstimate filter:
> 
> $ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
> stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200
> 
> $ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
> stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200

How does this compare to vf_mcfps?


> 
> Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5
> 
> Compared to simple OBMC which over-smooth edges, Objects clustering and
> Adaptive OBMC makes the edges crisp but also introduce blocking artifacts
> where MVs are bad (with default search window = 7). But I think it’s ESA’s
> fault. The paper doesn’t specify which motion estimation method they used;
> I have been using ESA. I think quality can be further improved with EPZS,
> which I'm going to implement.

Agree, good idea. Also, do you use some "smoothness" term?
I mean, in video encoding the difference between the predicted MV and
the actual one needs to be stored; the number of bits is scaled by
some constant and added in when testing MVs, which results in MVs being
closer together.
Something similar should help any motion estimation, be it ESA or EPZS.
I didn't check whether you already use something like that, but if not,
it's probably a good idea to try, also as it's quite simple.
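A minimal sketch of such a rate-style smoothness penalty: the search compares SAD plus a lambda-scaled cost of the MV's distance from its prediction. The names and the crude bit estimate are illustrative assumptions, not FFmpeg API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hedged sketch of a rate-style "smoothness" term for block matching:
 * penalize the distance between a candidate MV and its prediction with
 * a lambda-scaled bit estimate, the way encoders account for the cost
 * of coding the MV difference. */

static int mv_bits(int d)   /* rough bits to code one MV component */
{
    int bits = 1;
    for (d = abs(d); d > 0; d >>= 1)
        bits += 2;
    return bits;
}

static int mv_cost(int mx, int my, int pred_x, int pred_y, int lambda)
{
    return lambda * (mv_bits(mx - pred_x) + mv_bits(my - pred_y));
}

/* During the search, candidates are compared on SAD plus this penalty,
 * which pulls the resulting vector field toward coherence. */
static int match_cost(int sad, int mx, int my,
                      int pred_x, int pred_y, int lambda)
{
    return sad + mv_cost(mx, my, pred_x, pred_y, lambda);
}
```

With lambda = 0 this degenerates to plain SAD matching; raising lambda trades per-block match quality for a smoother, more "true motion"-like field.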

thanks!

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

There will always be a question for which you do not know the correct answer.




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-07-24 Thread Davinder Singh
https://github.com/dsmudhar/FFmpeg/commits/dev

The Paper 2 algorithm is complete, and it looks good. If I compare Paper 2
(which uses bilateral motion estimation) versus the motion vectors exported by
the mEstimate filter:

$ tiny_psnr 60_source_2.yuv 60_mest-esa+obmc.yuv
stddev:1.43 PSNR: 45.02 MAXDIFF:  174 bytes:476928000/474163200

$ tiny_psnr 60_source_2.yuv 60_paper2_aobmc+cls.yuv
stddev:1.25 PSNR: 46.18 MAXDIFF:  187 bytes:476928000/474163200

Frame comparison: http://www.mediafire.com/?qe7sc4o0s4hgug5

Compared to simple OBMC, which over-smooths edges, object clustering and
adaptive OBMC make the edges crisp but also introduce blocking artifacts
where MVs are bad (with the default search window = 7). But I think it's ESA's
fault. The paper doesn't specify which motion estimation method they used;
I have been using ESA. I think quality can be further improved with EPZS,
which I'm going to implement.
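The over-smoothing of plain OBMC comes from exactly this kind of per-pixel weighted blend of overlapping block predictions. A minimal 1-D sketch with an illustrative 4-tap linear ramp (not the window the filter actually uses):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal 1-D sketch of the blending plain OBMC does: each output
 * pixel is a weighted average of the predictions from the overlapping
 * blocks, with complementary weights that sum to a constant at every
 * position, so flat areas are preserved but edges get averaged. */

/* Blend two overlapping 4-pixel predictions over the same span:
 * pred_a fades out while pred_b fades in; weights always sum to 8. */
static void obmc_blend4(uint8_t *dst, const uint8_t *pred_a,
                        const uint8_t *pred_b)
{
    static const int wa[4] = { 7, 5, 3, 1 };   /* block A fades out */
    for (int i = 0; i < 4; i++) {
        int a = wa[i], b = 8 - a;              /* complementary     */
        dst[i] = (uint8_t)((a * pred_a[i] + b * pred_b[i] + 4) >> 3);
    }
}
```

Adaptive OBMC, as described above, effectively sharpens this window near object boundaries instead of blending uniformly.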

I also tried to tweak VS-BMC (variable-size block motion compensation),
which reduced the blocking artifacts in the VS-BMC area. I had to
experiment a lot; more remains to be done.

mEstimate filter (ESA) + Simple OBMC:
http://www.mediafire.com/?3b8j1zj1lsuw979
Paper 2 (full): http://www.mediafire.com/?npbw1iv6tmxwvyu


Regards,
DSM_


Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-22 Thread Davinder Singh
On Mon, Jun 20, 2016 at 4:33 PM Michael Niedermayer 
wrote:

> On Mon, Jun 20, 2016 at 09:54:15AM +, Davinder Singh wrote:
> > On Sat, Jun 18, 2016 at 3:16 AM Michael Niedermayer
> 
> > wrote:
> >
> > > On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
> > > [...]
> > > > Yes, I did that, after understanding it completely. It now works
> with the
> > > > motion vectors generated by mEstimate filter. Now I’m trying to
> improve
> > > it
> > > > based on this paper: Overlapped Block Motion Compensation: An
> > > > Estimation-Theoretic Approach
> > >
> > > > <
> > >
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8359=rep1=pdf
> > > >
> > >
> > > this is 22 years old
> > >
> > >
> > > > and
> > > > this one: Window Motion Compensation
> > > > .Takes a lot of
> time
> > >
> > > this is 25 years old
> > >
> > > not saying old papers are bad, just that this represents the knowledge
> > > of 20 years ago
> > >
> > > also its important to keep in mind that blind block matching of any
> > > metric will not be enough. To find true motion the whole motion
> > > vector fields of multiple frames will need to be considered
> > >
> > > For example a ball thrown accross the field of view entering and
> > > exiting the picture needs to move smoothly and at the ends (in time)
> > > there are frames without the ball then a frame with the ball
> > > these 2 are not enough to interpolate the frames between as we have
> > > just one location where the ball is. With the next frames though
> > > we can find the motion trajectory of the ball and interpolate it end
> > > to end
> > >
> > > I think papers which work on problems like this and also interpolation
> > > of all the areas that end up overlapping and covering each other
> > > like the backgroud behind the ball in that example would be better
> > > starting points for implementing motion estiation because ultimatly
> > > that is the kind of ME code we would like to have.
> > > Block matching with various windows, OBMC, ... are all good but
> > > if in our example the vectors for the ball or background are off that
> > > will look rather bad with any motion compensation
> > > So trying to move a bit toward this would make sense but first
> > > having some motion estimation even really basic and dumb with
> > > mc working in a testable filter (pair) should probably be done.
> > > Iam just mentioning this as a bit of a preview of what i hope could
> > > eventually be implemented, maybe this would be after GSoC but its
> > > the kind of code needed to have really usable frame interpolation
> > >
> > >
> > >
> > > > reading them. I think we need to add new Raised Cosine window
> (weights)
> > > > along with Linear Window (currently implemented). What do you say?
> > >
> > > i dont know, the windows used in snow are already the best of several
> > > tried (for snow).
> > > no great gains will be found by changing the OBMC window from snow.
> > >
> > >
> > > >
> > > > Also making mInterpolate work with variable macroblock size MC. The
> > > current
> > > > interpolation works without half pel accuracy, though.
> > >
> > > mcfps has fully working 1/4 pel OBMC code, that should be fine to be
> > > used as is i think unless i miss something
> > >
> > > half pel is 20 years old, it is not usefull
> > > multiple block sizes on the MC side should not really matter ATM
> > > smaller blocks are a bit slower but first we should get the code
> > > working, then working with good quality and then working fast.
> > >
> > > multiple block sizes may be usefull for the estimation side if it
> > > improves estimation somehow.
> > >
> > > Can i see your current "work in progress" ?
> > >
> > >
> > > [...]
> > > > I’m moving estimation code to some new file motion_est.c file and the
> > > > methods are shared by both mEstimate and mInterpolate filters.
> mEstimate
> > > > store the MVs in frame’s side data for any other filter. Moreover,
> any
> > > > other filter if need post processing on MVs it can directly use the
> > > shared
> > > > methods. But, mInterpolate use them internally, no saving in
> sidedata,
> > > and
> > > > saving unnecessary processing.
> > >
> > > This design sounds good
> > >
> > >
> > > >
> > > >
> > > > Also, Paper [1] doesn’t uses window with OBMC at all. It just find
> normal
> > > > average without weight. Perhaps to compare papers I either need to
> add
> > > > multiple option for each setting or need to assign the algorithm as
> > > > researcher’s name in filter options.
> > >
> > >
> > >
> > Paper [1] and [2] uses functions or do post processing on motion vectors,
> > so needs fast ME algorithms, which currently I’m working on. [*M]
> >
> > Let me summarize the papers (from Email 1, this thread):
> >
> > Paper [1]: Zhai et al. (2005) A Low Complexity Motion Compensated Frame
> > Interpolation Method
> >
> > [Quote]
> > This paper 

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-20 Thread Michael Niedermayer
On Mon, Jun 20, 2016 at 09:54:15AM +, Davinder Singh wrote:
> On Sat, Jun 18, 2016 at 3:16 AM Michael Niedermayer 
> wrote:
> 
> > On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
> > [...]
> > > Yes, I did that, after understanding it completely. It now works with the
> > > motion vectors generated by mEstimate filter. Now I’m trying to improve
> > it
> > > based on this paper: Overlapped Block Motion Compensation: An
> > > Estimation-Theoretic Approach
> >
> > > <
> > http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8359=rep1=pdf
> > >
> >
> > this is 22 years old
> >
> >
> > > and
> > > this one: Window Motion Compensation
> > > .Takes a lot of time
> >
> > this is 25 years old
> >
> > not saying old papers are bad, just that this represents the knowledge
> > of 20 years ago
> >
> > also its important to keep in mind that blind block matching of any
> > metric will not be enough. To find true motion the whole motion
> > vector fields of multiple frames will need to be considered
> >
> > For example a ball thrown accross the field of view entering and
> > exiting the picture needs to move smoothly and at the ends (in time)
> > there are frames without the ball then a frame with the ball
> > these 2 are not enough to interpolate the frames between as we have
> > just one location where the ball is. With the next frames though
> > we can find the motion trajectory of the ball and interpolate it end
> > to end
> >
> > I think papers which work on problems like this and also interpolation
> > of all the areas that end up overlapping and covering each other
> > like the backgroud behind the ball in that example would be better
> > starting points for implementing motion estiation because ultimatly
> > that is the kind of ME code we would like to have.
> > Block matching with various windows, OBMC, ... are all good but
> > if in our example the vectors for the ball or background are off that
> > will look rather bad with any motion compensation
> > So trying to move a bit toward this would make sense but first
> > having some motion estimation even really basic and dumb with
> > mc working in a testable filter (pair) should probably be done.
> > Iam just mentioning this as a bit of a preview of what i hope could
> > eventually be implemented, maybe this would be after GSoC but its
> > the kind of code needed to have really usable frame interpolation
> >
> >
> >
> > > reading them. I think we need to add new Raised Cosine window (weights)
> > > along with Linear Window (currently implemented). What do you say?
> >
> > i dont know, the windows used in snow are already the best of several
> > tried (for snow).
> > no great gains will be found by changing the OBMC window from snow.
> >
> >
> > >
> > > Also making mInterpolate work with variable macroblock size MC. The
> > current
> > > interpolation works without half pel accuracy, though.
> >
> > mcfps has fully working 1/4 pel OBMC code, that should be fine to be
> > used as is i think unless i miss something
> >
> > half pel is 20 years old, it is not usefull
> > multiple block sizes on the MC side should not really matter ATM
> > smaller blocks are a bit slower but first we should get the code
> > working, then working with good quality and then working fast.
> >
> > multiple block sizes may be usefull for the estimation side if it
> > improves estimation somehow.
> >
> > Can i see your current "work in progress" ?
> >
> >
> > [...]
> > > I’m moving estimation code to some new file motion_est.c file and the
> > > methods are shared by both mEstimate and mInterpolate filters. mEstimate
> > > store the MVs in frame’s side data for any other filter. Moreover, any
> > > other filter if need post processing on MVs it can directly use the
> > shared
> > > methods. But, mInterpolate use them internally, no saving in sidedata,
> > and
> > > saving unnecessary processing.
> >
> > This design sounds good
> >
> >
> > >
> > >
> > > Also, Paper [1] doesn’t uses window with OBMC at all. It just find normal
> > > average without weight. Perhaps to compare papers I either need to add
> > > multiple option for each setting or need to assign the algorithm as
> > > researcher’s name in filter options.
> >
> >
> >
> Paper [1] and [2] uses functions or do post processing on motion vectors,
> so needs fast ME algorithms, which currently I’m working on. [*M]
> 
> Let me summarize the papers (from Email 1, this thread):
> 
> Paper [1]: Zhai et al. (2005) A Low Complexity Motion Compensated Frame
> Interpolation Method
> 
> [Quote]
> This paper propose a MCFI method intended for real time processing. It
> first examines the motion vectors in the bitstream [*1]. 8x8 block size is
> used rather than 16x16 as in most cases; Using smaller block size leads to
> denser motion field, so neighboring MVs are more highly correlated, so
> prediction is better. To reduce complexity, MVs in 

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-20 Thread Davinder Singh
On Sat, Jun 18, 2016 at 3:16 AM Michael Niedermayer 
wrote:

> On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
> [...]
> > Yes, I did that, after understanding it completely. It now works with the
> > motion vectors generated by mEstimate filter. Now I’m trying to improve
> it
> > based on this paper: Overlapped Block Motion Compensation: An
> > Estimation-Theoretic Approach
>
> > <
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8359=rep1=pdf
> >
>
> this is 22 years old
>
>
> > and
> > this one: Window Motion Compensation
> > .Takes a lot of time
>
> this is 25 years old
>
> not saying old papers are bad, just that this represents the knowledge
> of 20 years ago
>
> also its important to keep in mind that blind block matching of any
> metric will not be enough. To find true motion the whole motion
> vector fields of multiple frames will need to be considered
>
> For example a ball thrown accross the field of view entering and
> exiting the picture needs to move smoothly and at the ends (in time)
> there are frames without the ball then a frame with the ball
> these 2 are not enough to interpolate the frames between as we have
> just one location where the ball is. With the next frames though
> we can find the motion trajectory of the ball and interpolate it end
> to end
>
> I think papers which work on problems like this and also interpolation
> of all the areas that end up overlapping and covering each other
> like the backgroud behind the ball in that example would be better
> starting points for implementing motion estiation because ultimatly
> that is the kind of ME code we would like to have.
> Block matching with various windows, OBMC, ... are all good but
> if in our example the vectors for the ball or background are off that
> will look rather bad with any motion compensation
> So trying to move a bit toward this would make sense but first
> having some motion estimation even really basic and dumb with
> mc working in a testable filter (pair) should probably be done.
> Iam just mentioning this as a bit of a preview of what i hope could
> eventually be implemented, maybe this would be after GSoC but its
> the kind of code needed to have really usable frame interpolation
>
>
>
> > reading them. I think we need to add new Raised Cosine window (weights)
> > along with Linear Window (currently implemented). What do you say?
>
> i dont know, the windows used in snow are already the best of several
> tried (for snow).
> no great gains will be found by changing the OBMC window from snow.
>
>
> >
> > Also making mInterpolate work with variable macroblock size MC. The
> current
> > interpolation works without half pel accuracy, though.
>
> mcfps has fully working 1/4 pel OBMC code, that should be fine to be
> used as is i think unless i miss something
>
> half pel is 20 years old, it is not usefull
> multiple block sizes on the MC side should not really matter ATM
> smaller blocks are a bit slower but first we should get the code
> working, then working with good quality and then working fast.
>
> multiple block sizes may be usefull for the estimation side if it
> improves estimation somehow.
>
> Can i see your current "work in progress" ?
>
>
> [...]
> > I’m moving estimation code to some new file motion_est.c file and the
> > methods are shared by both mEstimate and mInterpolate filters. mEstimate
> > store the MVs in frame’s side data for any other filter. Moreover, any
> > other filter if need post processing on MVs it can directly use the
> shared
> > methods. But, mInterpolate use them internally, no saving in sidedata,
> and
> > saving unnecessary processing.
>
> This design sounds good
>
>
> >
> >
> > Also, Paper [1] doesn’t uses window with OBMC at all. It just find normal
> > average without weight. Perhaps to compare papers I either need to add
> > multiple option for each setting or need to assign the algorithm as
> > researcher’s name in filter options.
>
>
>
Papers [1] and [2] use functions or do post-processing on motion vectors,
so they need fast ME algorithms, which I'm currently working on. [*M]

Let me summarize the papers (from Email 1, this thread):

Paper [1]: Zhai et al. (2005) A Low Complexity Motion Compensated Frame
Interpolation Method

[Quote]
This paper proposes an MCFI method intended for real-time processing. It
first examines the motion vectors in the bitstream [*1]. An 8x8 block size is
used rather than 16x16 as in most cases; using a smaller block size leads to
a denser motion field, so neighboring MVs are more highly correlated and
prediction is better. To reduce complexity, MVs in the bitstream are utilized
[*1], but they need to be filtered, as not all of them represent true motion.
They are grouped into “good vectors, can be used directly” and “bad
vectors, need to find true motion”. For the classification of MVs into
groups, SAD and BAD are used. For an 8x8 block in to-be-interpolated 
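The SAD side of that good/bad classification can be sketched as follows; block size, stride handling, and the threshold are illustrative assumptions (the paper's exact BAD criterion is cut off above):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch of the SAD-based MV check described above: a
 * bitstream vector is kept as "good" only if the SAD between the two
 * blocks it links in the previous and next frames stays below a
 * threshold; otherwise it goes to the "bad, re-estimate" group. */

#define BLOCK 8

static int sad_block(const uint8_t *prev, const uint8_t *next, int stride)
{
    int sum = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sum += abs(prev[y * stride + x] - next[y * stride + x]);
    return sum;
}

/* 1 = "good vector, use directly"; 0 = "bad, find true motion". */
static int mv_is_good(const uint8_t *prev, const uint8_t *next,
                      int stride, int threshold)
{
    return sad_block(prev, next, stride) < threshold;
}
```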

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-19 Thread Michael Niedermayer
On Fri, Jun 17, 2016 at 11:45:51PM +0200, Michael Niedermayer wrote:
> On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
[...]
> Can i see your current "work in progress" ?

I found https://github.com/dsmudhar/FFmpeg; I somehow wasn't aware of
that repo even though you posted it in a patch previously. I must
have missed that.

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Democracy is the form of government in which you can choose your dictator




Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-17 Thread Michael Niedermayer
On Fri, Jun 17, 2016 at 08:19:00AM +, Davinder Singh wrote:
> On Wed, Jun 15, 2016 at 5:04 PM Michael Niedermayer 
> wrote:
> 
> > Hi
> >
> > On Tue, May 31, 2016 at 10:43:38PM +, Davinder Singh wrote:
> > > There’s a lot of research done on Motion Estimation. Depending upon the
> > > intended application of the resultant motion vectors, the method used for
> > > motion estimation can be very different.
> > >
> > > Classification of Motion Estimation Methods:
> > >
> > > Direct Methods: In direct methods we calculate optical flow
> > >  in the scene.
> > >
> > > - Phase Correlation
> > >
> > > - Block Matching
> > >
> > > - Spatio-Temporal Gradient
> > >
> > >  - Optical flow: Uses optical flow equation to find motion in the scene.
> > >
> > >  - Pel-recursive: Also compute optical flow, but in such a way that allow
> > > recursive computability on vector fields)
> > >
> > > Indirect Methods
> > >
> > > - Feature based Method: Find features in the frame, and used for
> > estimation.
> > >
> > > Here are some papers on Frame Rate Up-Conversion (FRUC):
> > >
> > > Phase Correlation:
> > >
> > > This method relies on frequency-domain representation of data, calculated
> > > using fast Fourier transform.
> > >  Phase Correlation
> > > provides a correlation surface from the comparison of images. This
> > enables
> > > the identification of motion on a pixel-by-pixel basis for correct
> > > processing of each motion type. Since phase correlation operates in the
> > > frequency rather than the spatial domain, it is able to zero in on
> > details
> > > while ignoring such factors as noise and grain within the picture. In
> > other
> > > words, the system is highly tolerant of the noise variations and rapid
> > > changes in luminance levels that are found in many types of content –
> > > resulting in high-quality performance on fades, objects moving in and out
> > > of shade, and light ashes.
> > >
> > > Papers:
> > >
> > > [1] "Disney Research » Phase-Based Frame Interpolation for Video." IEEE
> > > CVPR 2015 
> > >
> > > [2] Yoo, DongGon et al. "Phase Correlated Bilateral Motion Estimation for
> > > Frame Rate Up-Conversion." The 23rd International Technical Conference on
> > > Circuits/Systems, Computers and Communications (ITC-CSCC Jul. 2008.
> > >
> > > 
> > >
> > > The video on paper [1] page demonstrate comparison between various
> > methods.
> > >
> > > Optical Flow:
> > >
> > > http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf
> > >
> > > [3] Brox et al. "High accuracy optical flow estimation based on a theory
> > > for warping." Computer Vision - ECCV 2004: 25-36.
> > >
> > > <
> > >
> > http://www.wisdom.weizmann.ac.il/~/vision/courses/2006_2/papers/optic_flow_multigrid/brox_eccv04_of.pdf
> > > >
> > >
> > > Slowmovideo  open-source project is
> > based
> > > on Optical flow equation.
> > >
> > > Algorithm we can implement is based on block matching method.
> > >
> > > Motion Compensated Frame Interpolation
> > >
> > > Paper:
> > >
> > > [4] Zhai et al. "A low complexity motion compensated frame interpolation
> > > method." IEEE ISCAS 2005: 4927-4930.
> > >
> > > 
> > >
> > > Block-based motion estimation and pixel-wise motion estimation are the
> > two
> > > main categories of motion estimation methods. In general, pixel-wise
> > motion
> > > estimation can attain accurate motion fields, but needs a substantial
> > > amount of computation. In contrast, block matching algorithms (BMA) can
> > be
> > > efficiently implemented and provide good performance.
> > >
> > > Most MCFI algorithms utilize the block-matching algorithm (BMA) for
> > motion
> > > estimation (ME). BMA is simple and easy to implement. It also generates a
> > > compactly represented motion field. However, unlike video compression, it
> > > is more important to find true motion trajectories in MCFI. The objective
> > > of MC in MCFI is not to minimize the energy of MC residual signals, but
> > to
> > > reconstruct interpolated frames with better visual quality.
> > >
> > > The algorithm uses motion vectors which are embedded in bit-stream. If
> > > vectors exported by codec (using +export_mvs flag2) are used when
> > > available, computation of the motion vectors will be significantly
> > reduced
> > > for realtime playback. Otherwise the mEstimate filter will generate MVs,
> > > and to make the process faster, same algorithms (used by x264 and x265) -
> > > Diamond, Hex, UMH, Star will be implemented in the filter. Other filter -
> > > mInterpolate will use the MVs in the frame side data to interpolate
> > frames
> > > using various methods - OBMC (Overlapped block motion 

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-17 Thread Davinder Singh
On Wed, Jun 15, 2016 at 5:04 PM Michael Niedermayer 
wrote:

> Hi
>
> On Tue, May 31, 2016 at 10:43:38PM +, Davinder Singh wrote:
> > There’s a lot of research done on Motion Estimation. Depending upon the
> > intended application of the resultant motion vectors, the method used for
> > motion estimation can be very different.
> >
> > Classification of Motion Estimation Methods:
> >
> > Direct Methods: In direct methods we calculate optical flow
> >  in the scene.
> >
> > - Phase Correlation
> >
> > - Block Matching
> >
> > - Spatio-Temporal Gradient
> >
> >  - Optical flow: Uses optical flow equation to find motion in the scene.
> >
> >  - Pel-recursive: Also compute optical flow, but in such a way that allow
> > recursive computability on vector fields)
> >
> > Indirect Methods
> >
> > - Feature based Method: Find features in the frame, and used for
> estimation.
> >
> > Here are some papers on Frame Rate Up-Conversion (FRUC):
> >
> > Phase Correlation:
> >
> > This method relies on frequency-domain representation of data, calculated
> > using fast Fourier transform.
> >  Phase Correlation
> > provides a correlation surface from the comparison of images. This
> enables
> > the identification of motion on a pixel-by-pixel basis for correct
> > processing of each motion type. Since phase correlation operates in the
> > frequency rather than the spatial domain, it is able to zero in on
> details
> > while ignoring such factors as noise and grain within the picture. In
> other
> > words, the system is highly tolerant of the noise variations and rapid
> > changes in luminance levels that are found in many types of content –
> > resulting in high-quality performance on fades, objects moving in and out
> > of shade, and light ashes.
> >
> > Papers:
> >
> > [1] "Disney Research » Phase-Based Frame Interpolation for Video." IEEE
> > CVPR 2015 
> >
> > [2] Yoo, DongGon et al. "Phase Correlated Bilateral Motion Estimation for
> > Frame Rate Up-Conversion." The 23rd International Technical Conference on
> > Circuits/Systems, Computers and Communications (ITC-CSCC Jul. 2008.
> >
> > 
> >
> > The video on paper [1] page demonstrate comparison between various
> methods.
> >
> > Optical Flow:
> >
> > http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf
> >
> > [3] Brox et al. "High accuracy optical flow estimation based on a theory
> > for warping." Computer Vision - ECCV 2004: 25-36.
> >
> > <
> >
> http://www.wisdom.weizmann.ac.il/~/vision/courses/2006_2/papers/optic_flow_multigrid/brox_eccv04_of.pdf
> > >
> >
> > Slowmovideo  open-source project is
> based
> > on Optical flow equation.
> >
> > Algorithm we can implement is based on block matching method.
> >
> > Motion Compensated Frame Interpolation
> >
> > Paper:
> >
> > [4] Zhai et al. "A low complexity motion compensated frame interpolation
> > method." IEEE ISCAS 2005: 4927-4930.
> >
> > 
> >
> > Block-based motion estimation and pixel-wise motion estimation are the
> two
> > main categories of motion estimation methods. In general, pixel-wise
> motion
> > estimation can attain accurate motion fields, but needs a substantial
> > amount of computation. In contrast, block matching algorithms (BMA) can
> be
> > efficiently implemented and provide good performance.
> >
> > Most MCFI algorithms utilize the block-matching algorithm (BMA) for
> motion
> > estimation (ME). BMA is simple and easy to implement. It also generates a
> > compactly represented motion field. However, unlike video compression, it
> > is more important to find true motion trajectories in MCFI. The objective
> > of MC in MCFI is not to minimize the energy of MC residual signals, but
> to
> > reconstruct interpolated frames with better visual quality.
> >
> > The algorithm uses motion vectors which are embedded in bit-stream. If
> > vectors exported by codec (using +export_mvs flag2) are used when
> > available, computation of the motion vectors will be significantly
> reduced
> > for realtime playback. Otherwise the mEstimate filter will generate MVs,
> > and to make the process faster, same algorithms (used by x264 and x265) -
> > Diamond, Hex, UMH, Star will be implemented in the filter. Other filter -
> > mInterpolate will use the MVs in the frame side data to interpolate
> frames
> > using various methods - OBMC (Overlapped block motion compensation),
> simple
> > frame blending and frame duplication etc.
> >
> > However, MVs generated based on SAD or BAD might bring serious artifacts
> if
> > they are used directly. So, the algorithm first examines the motion
> vectors
> > and classify into two groups, one group with vectors which are 

Re: [FFmpeg-devel] [GSoC] Motion Interpolation

2016-06-15 Thread Michael Niedermayer
Hi

On Tue, May 31, 2016 at 10:43:38PM +, Davinder Singh wrote:
> There’s a lot of research done on Motion Estimation. Depending upon the
> intended application of the resultant motion vectors, the method used for
> motion estimation can be very different.
> 
> Classification of Motion Estimation Methods:
> 
> Direct Methods: In direct methods we calculate optical flow
>  in the scene.
> 
> - Phase Correlation
> 
> - Block Matching
> 
> - Spatio-Temporal Gradient
> 
>  - Optical flow: Uses optical flow equation to find motion in the scene.
> 
>  - Pel-recursive: Also compute optical flow, but in such a way that allow
> recursive computability on vector fields)
> 
> Indirect Methods
> 
> - Feature based Method: Find features in the frame, and used for estimation.
> 
> Here are some papers on Frame Rate Up-Conversion (FRUC):
> 
> Phase Correlation:
> 
> This method relies on frequency-domain representation of data, calculated
> using fast Fourier transform.
>  Phase Correlation
> provides a correlation surface from the comparison of images. This enables
> the identification of motion on a pixel-by-pixel basis for correct
> processing of each motion type. Since phase correlation operates in the
> frequency rather than the spatial domain, it is able to zero in on details
> while ignoring such factors as noise and grain within the picture. In other
> words, the system is highly tolerant of the noise variations and rapid
> changes in luminance levels that are found in many types of content –
> resulting in high-quality performance on fades, objects moving in and out
> of shade, and light ashes.
> 
> Papers:
> 
> [1] "Disney Research » Phase-Based Frame Interpolation for Video." IEEE
> CVPR 2015 
> 
> [2] Yoo, DongGon et al. "Phase Correlated Bilateral Motion Estimation for
> Frame Rate Up-Conversion." The 23rd International Technical Conference on
> Circuits/Systems, Computers and Communications (ITC-CSCC Jul. 2008.
> 
> 
> 
> The video on paper [1] page demonstrate comparison between various methods.
> 
> Optical Flow:
> 
> http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf
> 
> [3] Brox et al. "High accuracy optical flow estimation based on a theory
> for warping." Computer Vision - ECCV 2004: 25-36.
> 
> <
> http://www.wisdom.weizmann.ac.il/~/vision/courses/2006_2/papers/optic_flow_multigrid/brox_eccv04_of.pdf
> >
> 
> Slowmovideo  open-source project is based
> on Optical flow equation.
> 
> Algorithm we can implement is based on block matching method.
> 
> Motion Compensated Frame Interpolation
> 
> Paper:
> 
> [4] Zhai et al. "A low complexity motion compensated frame interpolation
> method." IEEE ISCAS 2005: 4927-4930.
> 
> 
> 
> Block-based motion estimation and pixel-wise motion estimation are the two
> main categories of motion estimation methods. In general, pixel-wise motion
> estimation can attain accurate motion fields, but needs a substantial
> amount of computation. In contrast, block matching algorithms (BMA) can be
> efficiently implemented and provide good performance.
> 
> Most MCFI algorithms utilize the block-matching algorithm (BMA) for motion
> estimation (ME). BMA is simple and easy to implement. It also generates a
> compactly represented motion field. However, unlike video compression, it
> is more important to find true motion trajectories in MCFI. The objective
> of MC in MCFI is not to minimize the energy of MC residual signals, but to
> reconstruct interpolated frames with better visual quality.
> 
> The algorithm uses motion vectors which are embedded in bit-stream. If
> vectors exported by codec (using +export_mvs flag2) are used when
> available, computation of the motion vectors will be significantly reduced
> for realtime playback. Otherwise the mEstimate filter will generate MVs,
> and to make the process faster, same algorithms (used by x264 and x265) -
> Diamond, Hex, UMH, Star will be implemented in the filter. Other filter -
> mInterpolate will use the MVs in the frame side data to interpolate frames
> using various methods - OBMC (Overlapped block motion compensation), simple
> frame blending and frame duplication etc.
> 
> However, MVs generated based on SAD or BAD might bring serious artifacts if
> they are used directly. So, the algorithm first examines the motion vectors
> and classify into two groups, one group with vectors which are considered
> to represent “true” motion, other having “bad” vectors, then carries out
> overlapped block bi-directional motion estimation on corresponding blocks
> having “bad” MVs. Finally, it utilizes motion vector post-processing and
> overlapped block motion compensation to generate interpolated frames
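The overlapped block bi-directional (bilateral) motion estimation mentioned above can be sketched as follows: fixed 8x8 blocks, a brute-force search window, and no frame-bounds handling; names are illustrative, not the filter's API.

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>

/* Hedged sketch of bilateral motion estimation: for a block of the
 * frame to be interpolated, find the vector v minimizing the mismatch
 * between the previous frame sampled at -v and the next frame sampled
 * at +v, so the motion trajectory passes through the block itself.
 * The caller must keep bx/by +- range + 8 inside the frame. */

#define B 8

static int sad8(const uint8_t *a, const uint8_t *b, int stride)
{
    int s = 0;
    for (int y = 0; y < B; y++)
        for (int x = 0; x < B; x++)
            s += abs(a[y * stride + x] - b[y * stride + x]);
    return s;
}

/* (bx,by): top-left of the block in the interpolated frame. */
static void bilateral_search(const uint8_t *prev, const uint8_t *next,
                             int stride, int bx, int by, int range,
                             int *best_mx, int *best_my)
{
    int best = INT_MAX;
    for (int my = -range; my <= range; my++)
        for (int mx = -range; mx <= range; mx++) {
            const uint8_t *p = prev + (by - my) * stride + (bx - mx);
            const uint8_t *n = next + (by + my) * stride + (bx + mx);
            int cost = sad8(p, n, stride);
            if (cost < best) {
                best = cost;
                *best_mx = mx;
                *best_my = my;
            }
        }
}
```

Unlike one-sided block matching, every pixel of the interpolated frame gets covered by construction, which is why the bilateral formulation is popular for FRUC.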