On Sat, Dec 22, 2012 at 10:22 AM, Oleg <[email protected]> wrote:
> You're misunderstanding some concepts. There are two operations that can be 
> accelerated by GPU: decoding and yuv->rgb converting. First thing can be 
> achieved by vaapi as you mentioned. Second, by using OpenGL shader(I prefer 
> OpenGL as it's cross-platform. Other option is to use DirectX on Win 
> platform) that will convert YUV->RGB and draw converted frame immediately.

If you don't use a shader, YUV->RGB conversion costs more than just
CPU cycles. The cycles themselves are mostly negligible on modern
CPUs. The real problem is that RGB needs considerably more bandwidth
to upload to the GPU, and that often becomes the bottleneck on
lower-end systems for HD content (which already requires quite a lot
of bandwidth).
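A rough back-of-the-envelope sketch of that bandwidth difference for
1080p (the resolutions and frame rate are just illustrative numbers,
not from the thread):

```python
# Per-frame upload sizes for 1080p, illustrating why uploading RGB(A)
# to the GPU costs far more bus bandwidth than subsampled YUV.
W, H, FPS = 1920, 1080, 30
pixels = W * H

rgba_frame = pixels * 4          # 8 bits per channel, 4 channels
yuv420_frame = pixels * 3 // 2   # full-res Y plane + quarter-res U and V

def mib_per_s(frame_bytes, fps=FPS):
    return frame_bytes * fps / (1024 ** 2)

print(f"RGBA   : {rgba_frame} B/frame, {mib_per_s(rgba_frame):.0f} MiB/s")
print(f"YUV420 : {yuv420_frame} B/frame, {mib_per_s(yuv420_frame):.0f} MiB/s")
```

So uploading RGBA is 8/3 (about 2.7x) the data of 4:2:0 YUV, every
frame, which is why you want the raw planes on the GPU and the
conversion in the shader.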

Hence my "naively" comment earlier. You don't only need to get the
result to the shader in YUV, but in downsampled YUV (e.g. 4:2:0,
4:2:2, 4:1:1, or whatever format the codec uses internally).
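For reference, the per-pixel arithmetic such a fragment shader ends up
doing is just the YUV->RGB matrix. A minimal sketch in Python, assuming
full-range BT.601 coefficients (codecs may actually use limited range
or BT.709, so treat the constants as illustrative):

```python
def yuv_to_rgb(y, u, v):
    """Full-range BT.601 YUV -> RGB, the same arithmetic a fragment
    shader would run per pixel after sampling the Y, U and V planes."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep it in 8-bit range
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # mid gray: (128, 128, 128)
```

Doing this on the GPU is essentially free, since it's a handful of
multiply-adds per pixel, and it lets you upload the small subsampled
planes instead of a full RGB frame.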
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user