Hi all,
As a hobby I have been experimenting with image restoration techniques. Lately
I have been trying out multi-frame super-resolution, and now I'm thinking
about looking into MPEG compression.
To give a bit of context: in image restoration you try to recover a degraded
image. Usually this is done by creating a model of how the image was degraded
and then estimating the image which, when passed through that model, yields
the degraded image. Thus the better you understand how the image was degraded,
the better your estimate can be.
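(In the standard linear formulation, assuming a degradation operator H,
additive noise n, and some regularizer R, you observe y = H x + n and estimate

    x_hat = argmin_x ||y - H x||^2 + lambda * R(x)

which is only the textbook sketch, of course, nothing codec-specific.)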
With MPEG compression, you want to know as much as possible about the decoding
process so you can better estimate the parameters of the degradation model.
So I want to dig deeper than just the resulting YCbCr frames, but I don't know
how this is most easily done; reading a media container and decoding the video
from scratch is just too much work. I know you can get the motion vectors out
of libav, but what else does it expose? (I'm primarily interested in H.262 and
H.264.) If I need to go deeper, for example to decode the video stream from
scratch, which parts of libav should I look into so I can avoid having to
worry about containers, etc.? Or is there another library that might be a
better fit for this?
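For the motion vector part, here is roughly what I have pieced together so
far. It is an untested sketch based on the "+export_mvs" codec option and the
AV_FRAME_DATA_MOTION_VECTORS side data (following the extract_mvs example), so
the details may well be off:

/* Untested sketch: decode a video file and dump the motion vectors
 * that the decoder exports as frame side data. Error handling and
 * decoder draining at EOF are omitted for brevity. */
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/motion_vector.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;
    const AVCodec *dec = NULL;

    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(fmt, NULL);

    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if (vstream < 0)
        return 1;

    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);

    /* Ask the decoder to export motion vectors as frame side data. */
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "flags2", "+export_mvs", 0);
    if (avcodec_open2(ctx, dec, &opts) < 0)
        return 1;

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream &&
            avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frame) >= 0) {
                AVFrameSideData *sd =
                    av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
                if (!sd)
                    continue;
                const AVMotionVector *mv = (const AVMotionVector *)sd->data;
                size_t n = sd->size / sizeof(*mv);
                for (size_t i = 0; i < n; i++)
                    /* source < 0: predicted from the past, > 0: from the future */
                    printf("ref %2d block %2dx%-2d (%4d,%4d) -> (%4d,%4d)\n",
                           mv[i].source, mv[i].w, mv[i].h,
                           mv[i].src_x, mv[i].src_y,
                           mv[i].dst_x, mv[i].dst_y);
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

On the container question, I have also seen av_parser_init()/av_parser_parse2()
in the decode_video example, which seems to split a bare elementary stream
into packets without involving libavformat at all; is that the intended route
if I want to skip containers entirely?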
Just to avoid misconceptions: I want to do this more out of technical
curiosity than to solve a concrete problem. It would be a fun way to learn
more about video compression and image restoration, and not having to
reimplement everything would make it a tad more realistic...
Sebastian Wahl