Hi, I'm developing a module for playing from a capture card (within a video framework that uses libav in other modules) and I'd like to use the libav yadif filter.
I have the captured picture stored in PIX_FMT_UYVY422 pixel format. I have been able to use swscale for converting the image and avpicture_deinterlace to apply the old libav deinterlacing method, but the results aren't as good as expected. I first have to make a copy and convert the image to PIX_FMT_YUV422P (the format that method requires), apply the deinterlacer, and then convert the result back to the original pixel format, so that is three copies and three conversions per frame (sketched below), which is a huge delay for such poor results. My source frame is an AVPicture (easy to convert to an AVFrame) and I need the result in the same form.

Questions:
- Do I need to convert my original frame to a specific pixel format?
- Do I need another data structure for frame storage (vector/array/list)?
- Is there any independent function to do that (inside vf_yadif.c) I could use?

Thank you in advance,
Hector.
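P.S. For reference, this is roughly the round trip I described above, using the old libav API (avpicture_*, swscale). It is only a sketch of my current approach, assuming a packed UYVY422 input of width x height; the function name deinterlace_uyvy422 and the SWS_BILINEAR flag are just placeholders I picked for the example, not anything imposed by the library.

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

/* Deinterlace a packed UYVY422 picture via the old avpicture_deinterlace(),
 * going through planar YUV422P and back (the three-copy pipeline above). */
int deinterlace_uyvy422(AVPicture *dst, const AVPicture *src,
                        int width, int height)
{
    AVPicture planar, deint;
    struct SwsContext *to_planar, *to_packed;
    int ret = -1;

    /* intermediate planar buffers required by avpicture_deinterlace() */
    avpicture_alloc(&planar, PIX_FMT_YUV422P, width, height);
    avpicture_alloc(&deint,  PIX_FMT_YUV422P, width, height);

    to_planar = sws_getContext(width, height, PIX_FMT_UYVY422,
                               width, height, PIX_FMT_YUV422P,
                               SWS_BILINEAR, NULL, NULL, NULL);
    to_packed = sws_getContext(width, height, PIX_FMT_YUV422P,
                               width, height, PIX_FMT_UYVY422,
                               SWS_BILINEAR, NULL, NULL, NULL);
    if (!to_planar || !to_packed)
        goto end;

    /* 1st copy: packed UYVY422 -> planar YUV422P */
    sws_scale(to_planar, (const uint8_t * const *)src->data, src->linesize,
              0, height, planar.data, planar.linesize);

    /* 2nd step: the old deinterlacer, YUV422P in, YUV422P out */
    if (avpicture_deinterlace(&deint, &planar, PIX_FMT_YUV422P,
                              width, height) < 0)
        goto end;

    /* 3rd copy: back to the packed format the rest of the module expects */
    sws_scale(to_packed, (const uint8_t * const *)deint.data, deint.linesize,
              0, height, dst->data, dst->linesize);
    ret = 0;

end:
    sws_freeContext(to_planar);
    sws_freeContext(to_packed);
    avpicture_free(&planar);
    avpicture_free(&deint);
    return ret;
}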
