Hi!
I'm writing a program that needs an RGBA video format to send data to an OpenGL alpha-masked texture. So, for convenience, I decided to encode my video frames into two files:
- The first contains the RGB data, with masked pixels always set to the same color. Encoding these frames with an MPEG-4 video codec therefore gives a good file size and doesn't waste bits on useless pixels.
- The second contains the alpha data, encoded with my own codec and file format.

I'm trying to use libavcodec to decode the first file as fast as possible and send the RGB data to my RGBA texture buffer. But swscale doesn't give me an accelerated YUV to RGBA colorspace conversion, so decoding frames is too slow for me. So rather than converting the entire YUV frame, I would like to convert only the pixels updated by the P-frames. Do swscale or libavcodec have a particular optimization for this sort of in-place decoding? (The swscale destination buffer is not cleared between frames, so it could be updated by applying only the diff.) Moreover, in my case the number of updated pixels is often very low!
_______________________________________________ Libav-user mailing list [email protected] http://ffmpeg.org/mailman/listinfo/libav-user
