Hello all,

I am using gstreamer 1.14 on an aarch64 based embedded Linux system with 
glimagesink. This particular platform uses an X11 EGL backend for GPU usage. I 
can successfully get video images on my screen, but am running into some real 
performance bottlenecks with dropped frames if the images get relatively large. 
This occurs in any pipeline, even pure GStreamer OpenGL ones. So for simplified 
testing purposes, to try and find the bottleneck, I have been using a simple 
OpenGL pipeline like:

gst-launch-1.0 gltestsrc ! glimagesink
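For anyone wanting to reproduce the observation, here is roughly how I have been surfacing the per-frame texture allocations. This is a sketch with some assumptions: "glmemory" is the debug category I believe gstglmemory.c registers, and I am grepping for the _gl_tex_create() function name that shows up in the debug log lines; both may differ slightly on other GStreamer versions.

```shell
# Run the test pipeline with verbose GL memory tracing; debug output goes
# to stderr, so redirect it to a log file for later inspection.
# (|| true so a missing gst-launch-1.0 doesn't abort a scripted run.)
GST_DEBUG=glmemory:7 gst-launch-1.0 gltestsrc num-buffers=100 ! glimagesink 2> gl.log || true

# Count texture creations. On a pipeline that reuses a small texture pool
# this count stays small; if a new texture is allocated per frame, the
# count tracks num-buffers instead. (|| true: grep exits 1 on zero matches.)
grep -c "tex_create" gl.log || true
```

Comparing this count between the desktop and the embedded target is what first made the difference in allocation behavior obvious to me.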

By turning on various levels of GStreamer debugging, and comparing against the 
same pipeline running on my x86-64 Linux machine, I observed some differences 
between the two systems.

My PC seems to allocate a handful of textures (and buffers obviously) when the 
pipeline is created, and then appears to reuse these throughout the lifetime of 
the application.

The embedded system, however, allocates a new texture with every frame. I 
believe this may be causing the performance penalty I am seeing, essentially 
blocking the pipeline for a longer period of time with each frame.

Obviously I have two different display platforms here, so it's not an 
apples-to-apples comparison. What I am trying to do, however, is find where and 
why this behavior is invoked for this display platform, to determine if it can 
be optimized. The call that actually creates the texture is _gl_tex_create() in 
gstglmemory.c, but I am continuing to trace this further up the pipe. In the 
meantime, I wanted to reach out here and see if this rings any bells with 
anyone who might have had a similar experience.

Thanks!

Sincerely,
Ken Sloat

_______________________________________________
gstreamer-embedded mailing list
gstreamer-embedded@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/gstreamer-embedded