Eduardo Alberto Hernández Muñoz wrote:
My first problem is that the normal approach is to attach a texture to a camera, but that means the data comes from the next frame, since the scene has to be rendered again. I could re-render the image, but that seems wasteful. Instead I would like to copy the entire screen: in OpenGL terms, the front buffer of the whole window.
Is your scene so large that rendering it again would take too long? If not, re-rendering is certainly the easiest approach.
Each frame consists of event, update, cull, and draw. So, to capture an already-rendered frame, I assume you find out during event that you need to do the capture; you would then want to skip update and cull and just do a glReadPixels during draw. I believe you could do this with a top-level Switch. Normally, the Switch would be configured to render the scene graph, but if the user requests a capture during event, the event handler would toggle the Switch to render another Camera that draws nothing but has a post-draw callback configured to do a glReadPixels on the front buffer. Not sure whether this would work or not, but it's my best guess.
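A rough, untested sketch of the post-draw callback part of that idea. The class name is my own invention; it assumes the standard osg::Camera::DrawCallback interface and osg::Image::readPixels (which issues a glReadPixels internally). The Switch wiring and the window dimensions are left to the application:

```cpp
#include <osg/Camera>
#include <osg/Image>

// Hypothetical post-draw callback: switch the read source to the front
// buffer and copy the given window rectangle into an osg::Image.
struct FrontBufferCapture : public osg::Camera::DrawCallback
{
    FrontBufferCapture(int w, int h)
        : image(new osg::Image), width(w), height(h) {}

    virtual void operator()(osg::RenderInfo& /*renderInfo*/) const
    {
        glReadBuffer(GL_FRONT);                 // read last displayed frame
        image->readPixels(0, 0, width, height,  // glReadPixels under the hood
                          GL_RGBA, GL_UNSIGNED_BYTE);
    }

    osg::ref_ptr<osg::Image> image;
    int width, height;
};
```

You would install this with camera->setPostDrawCallback(...) on the do-nothing capture Camera that the Switch toggles in.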
It would be easier to just render another frame and grab the image from the back buffer.
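For the back-buffer route, the usual OSG idiom is to attach an osg::Image to the camera so OSG copies the color buffer into it after the frame is drawn. A minimal sketch, assuming a `viewer` and its `camera` already exist in the application:

```cpp
#include <osg/Camera>
#include <osg/Image>
#include <osgDB/WriteFile>

// Attach an image to the camera's color buffer; OSG fills it in
// when the next frame is rendered.
osg::ref_ptr<osg::Image> image = new osg::Image;
camera->attach(osg::Camera::COLOR_BUFFER, image.get());

viewer.frame();  // render one more frame into the attachment
osgDB::writeImageFile(*image, "capture.png");
```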
The second problem is: if I am forced to spend the resources to redraw the frame camera by camera, how could I join the four textures?
Standard techniques, not specific to OSG, should work: memcpy the four sub-images into the quadrants of one larger image, or configure the glViewport so that each camera renders directly into its own quadrant of the window.
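The memcpy approach is just a row-by-row copy of each tile into its quadrant of a double-width, double-height buffer. A self-contained sketch (the function name and the bottom-left/bottom-right/top-left/top-right tile order are my own conventions, chosen to match OpenGL's bottom-up row order):

```cpp
#include <cstring>
#include <vector>

// Composite four W x H tiles into one 2W x 2H image.
// Tile order (hypothetical convention): 0 = bottom-left, 1 = bottom-right,
// 2 = top-left, 3 = top-right.
std::vector<unsigned char> compositeQuadrants(
    const std::vector<unsigned char> tiles[4], int w, int h, int channels)
{
    const int rowBytes    = w * channels;   // bytes per tile row
    const int outRowBytes = 2 * rowBytes;   // bytes per output row
    std::vector<unsigned char> out(4 * w * h * channels);

    for (int q = 0; q < 4; ++q)
    {
        const int xOff = (q % 2) * rowBytes;  // left or right half
        const int yOff = (q / 2) * h;         // bottom or top half
        for (int row = 0; row < h; ++row)
            std::memcpy(&out[(yOff + row) * outRowBytes + xOff],
                        &tiles[q][row * rowBytes],
                        rowBytes);
    }
    return out;
}
```

The glViewport alternative avoids this copy entirely: give each camera a viewport of (0,0), (w,0), (0,h), (w,h) with size w x h, and all four render into the same window surface.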
-Paul

_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

