That's an endianness bug: make sure you copy the texture contents to the NSBitmapImageRep correctly depending on whether you're running on PPC or x86.

That explains it. I suspected it had something to do with this.

You can also try drawing the texture you get into a CVOpenGLBuffer and then passing that to QC. Although there's a VRAM copy, this may end up performing better.

That's what I do now. It works perfectly and performs well.


One more thing:

Is there a way to use the output of QCRenderer directly as a texture? (I don't need any mipmaps.) Right now I copy the pixels with glReadPixels and then use gluBuild2DMipmaps to create a texture. The performance of that is quite bad, and there must be a faster way. I don't have an NSOpenGLContext to share, as it's not a Cocoa application; my program works completely off-screen. The final image has a CGLContextObj (FxImage).

Just use the QCRenderer to render into a CVOpenGLBuffer and then create a CVOpenGLTexture from it.
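A rough sketch of that pipeline, assuming you already have a QCRenderer attached to your CGLContextObj; `renderer`, `cglContext`, `cglPixelFormat`, the buffer size, and the render time are all placeholders for your own setup, and error checking is omitted:

```objc
#import <Quartz/Quartz.h>
#import <CoreVideo/CoreVideo.h>

// Create an OpenGL-backed buffer for the composition to render into.
CVOpenGLBufferRef buffer = NULL;
CVOpenGLBufferCreate(kCFAllocatorDefault, 640, 480, NULL, &buffer);

// Make the buffer the render target of the renderer's context, then render.
CVOpenGLBufferAttach(buffer, cglContext, 0, 0, 0);
[renderer renderAtTime:0.0 arguments:nil];

// Wrap the buffer as a texture without copying through main memory.
CVOpenGLTextureCacheRef cache = NULL;
CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, cglContext,
                           cglPixelFormat, NULL, &cache);
CVOpenGLTextureRef texture = NULL;
CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache,
                                           buffer, NULL, &texture);

// Bind it like any other texture; note the target is typically
// GL_TEXTURE_RECTANGLE_EXT, not GL_TEXTURE_2D.
glBindTexture(CVOpenGLTextureGetTarget(texture),
              CVOpenGLTextureGetName(texture));
```

This avoids the glReadPixels readback entirely: the pixels stay in VRAM from the QCRenderer output through to the final texture bind.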

________________________
Pierre-Olivier Latour
[EMAIL PROTECTED]

Quartzcomposer-dev mailing list      ([email protected])
