Hi,
I know there have been several questions about how sws_scale works,
and I thought I had it clear. However, even after reading the examples
in swscale-example.c and ffmpeg.c, I cannot get my YUV420P -> RGB
conversion to work.
Basically, I have raw data stored in YUV420P format (in a u_char *), and
in order to use it as a texture on my OpenGL primitive, I want to
convert it to RGB.
Here is what I do:
sws_context = sws_getContext(width_, height_, PIX_FMT_YUV420P,
                             outw_, outh_, PIX_FMT_RGB32,
                             flags, NULL, NULL, NULL);

/* Source: three planes, Y at full width, U and V at half width. */
sws_src[0] = (uint8_t *)frm;
sws_src[1] = sws_src[0] + framesize_;      /* assuming framesize_ == width_ * height_ */
sws_src[2] = sws_src[1] + framesize_ / 4;
sws_src_stride[0] = width_;
sws_src_stride[1] = sws_src_stride[2] = width_ / 2;

/* Destination: RGB32 is packed, so only the first plane and stride are set. */
sws_tar[0] = pixbuf_;
sws_tar[1] = sws_tar[2] = NULL;
sws_tar_stride[0] = outw_ * bytes_per_pixel;
sws_tar_stride[1] = sws_tar_stride[2] = 0;

sws_scale(sws_context, sws_src, sws_src_stride, 0, height_,
          sws_tar, sws_tar_stride);
My question is: how does sws_tar contain the RGB data after the call?
And consequently, how should it be passed to my OpenGL texture-loading
function?
Thanks for any help. I am not quite clear on what the entries of the
uint8_t *dst[] array in libswscale stand for, and I doubt that each of
them represents one channel (r, g, b)...
Thibault.
_______________________________________________
libav-user mailing list
[email protected]
https://lists.mplayerhq.hu/mailman/listinfo/libav-user