Hello

I'm working on a video performance application with Quartz Composer/Cocoa. I'm using custom OpenGL views and passing image data from one composition to another via QCPortImage types.

Most of my 'effects' are done with Core Image kernels. A few sources/image generators may use OpenGL to generate the QCPortImage output (i.e., not video, but generated 3D renderings, such as the Sphere patch with a GLSL shader applied). This code is basically modified QCTV and Performer code, smushed together in an unseemly way :)
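
For context, the effects are ordinary CIKernel Language programs compiled at runtime; a trivial one looks roughly like this (a minimal sketch with a hypothetical kernel; my real kernels are more involved):

    #import <QuartzCore/QuartzCore.h>

    // A trivial invert effect in the CIKernel Language, compiled at runtime.
    static NSString *kInvertKernel =
        @"kernel vec4 invertColor(sampler image)\n"
        @"{\n"
        @"    vec4 p = sample(image, samplerCoord(image));\n"
        @"    p.rgb = 1.0 - p.rgb;\n"
        @"    return p;\n"
        @"}";

    CIKernel *kernel = [[CIKernel kernelsWithString:kInvertKernel] objectAtIndex:0];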

I am wondering whether, in general, there is any advantage to using multithreaded OpenGL in this usage scenario, and I am curious how all of this is expected to work behind the scenes.

My understanding of this is somewhat incomplete because the technologies involved handle much of the underlying complexity for me: Core Image with its LLVM/shader compilation and optimization, and QCRenderer rendering compositions behind the scenes. But both are using the OpenGL context I've set up, so my first thought is: turn it on!
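
Concretely, the setup is along these lines (a minimal sketch; glContext, glPixelFormat, and compositionPath stand in for my actual objects, and the initializer is the standard QCRenderer one):

    #import <Quartz/Quartz.h>

    // Create the renderer on the OpenGL context/pixel format I already set up,
    // so QC and Core Image draw into my context rather than their own.
    QCRenderer *renderer = [[QCRenderer alloc] initWithOpenGLContext:glContext
                                                         pixelFormat:glPixelFormat
                                                                file:compositionPath];

    // Then per frame, on the same context:
    [renderer renderAtTime:time arguments:nil];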

So, in short: on a multi-core machine running 10.5.x, does enabling multithreaded OpenGL make any sense if I start to push my application by loading lots of effects?

I've tested enabling the MP engine in my app and saw no immediate performance differences or changes in the performance profile, but I am not currently pushing my system hard enough, and my code is most likely not complex enough, to see any immediate benefits. I know premature optimization is generally a no-no, but I am curious how all of these technologies fit together, and would love some enlightening pointers.
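
For reference, I'm enabling it the standard CGL way, roughly like this (a minimal sketch; glContext is my NSOpenGLContext, and error handling is pared down):

    #import <Cocoa/Cocoa.h>
    #include <OpenGL/OpenGL.h>

    // Opt the context into the multithreaded GL engine.
    CGLContextObj cglContext = (CGLContextObj)[glContext CGLContextObj];
    CGLError err = CGLEnable(cglContext, kCGLCEMPEngine);
    if (err != kCGLNoError) {
        // Not all renderers support the MP engine; stay single-threaded if it fails.
        NSLog(@"Couldn't enable the multithreaded GL engine: %s", CGLErrorString(err));
    }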

Thank you,



