On 03/28/2014 12:18 PM, cyril poulet wrote:

As a matter of fact, I wanted to use them for computer vision, where calculating edge densities and motion estimation are important. Now that these are deprecated, the only way is to first decode each frame and then re-calculate the MVs and DCTs, which is computationally costly...

You probably don't want to use them for computer vision; they are specifically optimized for minimizing visual artifacts on one hand, and for staying within protocol constraints on the other (e.g. some streams have no I-frames but do have guaranteed "frame convergence", which means the motion vectors in those streams are guaranteed to be "wrong").

It's not that they can't be useful; it's that the MV information is very noisy compared to what you'd get from dedicated computer vision methods. A pyramidal Lucas-Kanade tracker (OpenCV and libccv both have good implementations, supposedly) does not require a lot of CPU and provides vastly better motion estimation for computer vision.
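In case it helps, here is a minimal, untested sketch of pyramidal Lucas-Kanade with OpenCV's Python bindings; the file name "input.mp4" and the parameter values (feature count, window size, pyramid levels) are illustrative assumptions, not anything from your setup:

    import cv2

    cap = cv2.VideoCapture("input.mp4")   # assumed input file
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Pick corner features to track in the first frame.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=10)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Pyramidal LK: 3 pyramid levels, 21x21 search window per level.
        next_pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None,
            winSize=(21, 21), maxLevel=3)

        good_new = next_pts[status.flatten() == 1]
        good_old = prev_pts[status.flatten() == 1]
        # Each (good_old[i], good_new[i]) pair is a per-feature motion vector.

        prev_gray = gray
        prev_pts = good_new.reshape(-1, 1, 2)

    cap.release()

Each matched pair gives you a motion vector you actually control the quality of, which you can then bin into densities or feed into whatever motion model you need.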

