Here's the link to the AVbin feature request:
http://code.google.com/p/avbin/issues/detail?id=17

I ran some performance tests. The results are disappointing so far:
no significant improvement for small to medium-sized videos;
presumably some other processing step dominates the timing. I cannot
test large HD content (which in theory would benefit the most from
the shader approach) because the new code segfaults on me. I suspect
the cause is the same as for the image artifacts mentioned in the
last post, presumably a multi-threading issue.
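
In case anyone wants to reproduce the measurements, a rough harness
along these lines should do (the file name and drawing code are
placeholders, not part of the patch):

import cProfile
import pyglet

window = pyglet.window.Window()
player = pyglet.media.Player()
player.queue(pyglet.media.load('test.mpg'))  # placeholder clip
player.play()

@window.event
def on_draw():
    window.clear()
    if player.texture:
        player.texture.blit(0, 0)

# Profile the whole event loop to see which processing step dominates.
cProfile.run('pyglet.app.run()', 'playback.prof')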

Any help with this would be appreciated.

Jan

On Nov 26, 8:34 pm, Jan Bölsche <[email protected]>
wrote:
> Hi!
>
> For my application, pyglet's video playback performance isn't
> sufficient when playing back HD MPEG-2 video streams. I found one
> relatively easy way to reduce CPU load: letting the GPU perform the
> necessary YUV to RGB color space conversion, as pointed out in this
> excellent blog post by Michael Dominic Kostrzewa:
> http://www.mdk.org.pl/2007/11/17/gl-colorspace-conversions
>
> I managed to make media_player.py work with a fragment shader doing
> the color space conversion (see the attached patch). The shader was
> borrowed from here:
> http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c
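>
> For reference, the core of that shader is roughly the following
> (BT.601 coefficients for studio-range YUV; my paraphrase, not a
> verbatim copy):
>
> fragment_src = '''
> uniform sampler2D Ytex, Utex, Vtex;
> void main() {
>     float y = texture2D(Ytex, gl_TexCoord[0].st).r;
>     float u = texture2D(Utex, gl_TexCoord[0].st).r - 0.5;
>     float v = texture2D(Vtex, gl_TexCoord[0].st).r - 0.5;
>     y = 1.1643 * (y - 0.0625);  // expand studio-range luma
>     gl_FragColor = vec4(y + 1.5958 * v,                  // R
>                         y - 0.39173 * u - 0.81290 * v,   // G
>                         y + 2.017 * u,                   // B
>                         1.0);
> }
> '''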
>
> This patch must be considered very experimental since there are a few
> problems with the code:
>
> 1. Unfortunately, AVbin explicitly calls img_convert() in its
> avbin_decode_video() function. img_convert() performs the color
> space conversion on the CPU, which makes avbin_decode_video()
> useless for our purpose. So I reimplemented it in avbin.py without
> calling img_convert(). This required mirroring a private AVbin
> structure as well as some libavcodec structures and one function in
> ctypes; these declarations will probably break with the next release
> of AVbin (a stripped-down sketch of them follows below). The issue
> could easily be resolved by adding a function to AVbin's official
> API that skips img_convert() (which is deprecated anyway). I sent a
> feature request to the AVbin project.
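>
> A stripped-down sketch of those ctypes declarations (field layout
> assumed from AVbin's and libavcodec's sources at the time of
> writing; this is exactly the part that will break between releases):
>
> import ctypes
>
> class AVFrame(ctypes.Structure):
>     # Only the leading fields we need; the real libavcodec AVFrame
>     # has many more, so this must stay a prefix of the true layout.
>     _fields_ = [
>         ('data', ctypes.POINTER(ctypes.c_uint8) * 4),  # Y, U, V(, A) plane pointers
>         ('linesize', ctypes.c_int * 4),                # bytes per row per plane
>     ]
>
> class _AVbinStream(ctypes.Structure):
>     # Private AVbin structure; field order assumed from avbin.c.
>     _fields_ = [
>         ('type', ctypes.c_int32),
>         ('format_context', ctypes.c_void_p),
>         ('codec_context', ctypes.c_void_p),
>         ('frame', ctypes.POINTER(AVFrame)),
>     ]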
>
> 2. To let the GPU perform the YUV to RGB conversion, the three image
> planes (Y, U and V) from the video decoder's output need to be
> available as three separate textures. I made this happen by
>   - adding an array 'textures' to pyglet.media.Player which holds
> the three textures (plus a potential fourth alpha plane) alongside
> the pre-existing 'texture' field, which is now equivalent to
> textures[0] (aka the luminance channel);
>   - making AVbinSource.get_next_video_frame() return a list of three
> image instances (a binding sketch follows below).
> The above isn't exactly good design.
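>
> Roughly, handing the planes to the shader looks like this (uniform
> names match the fourcc shader above; the texture objects are
> whatever pyglet images provide):
>
> from pyglet import gl
>
> def bind_plane_textures(program, textures):
>     # Bind the Y, U and V plane textures to texture units 0..2 and
>     # point the shader's samplers at those units.
>     for unit, name in enumerate(('Ytex', 'Utex', 'Vtex')):
>         texture = textures[unit]
>         gl.glActiveTexture(gl.GL_TEXTURE0 + unit)
>         gl.glBindTexture(texture.target, texture.id)
>         gl.glUniform1i(gl.glGetUniformLocation(program, name), unit)
>     gl.glActiveTexture(gl.GL_TEXTURE0)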
>
> 3. Although the final video output is generally as expected, there
> are occasional artifacts. I suspect these are caused by concurrent
> access to AVbinStream.frame, which is the decoder's target buffer. I
> haven't investigated this further yet.
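>
> If it really is a race, the obvious (untested) direction would be to
> serialize access to the shared frame, along these lines:
>
> import threading
>
> class FrameBuffer(object):
>     # Illustration only: a lock so the decoding thread and the
>     # texture-upload code never touch the plane data concurrently.
>     def __init__(self):
>         self.lock = threading.Lock()
>         self.planes = None      # would hold the Y, U, V plane data
>
>     def write(self, planes):    # called from the decoding thread
>         with self.lock:
>             self.planes = planes
>
>     def read(self):             # called before uploading textures
>         with self.lock:
>             return self.planes  # a real fix would copy or double-buffer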
>
> I am using pyglet-shaders: http://pyglet-shaders.googlecode.com
>
> I haven't run any benchmarks yet to confirm that this actually
> lowers CPU load significantly, but I'd be surprised if it didn't.
>
> Hoping this is useful to someone,
> Jan
