Hi Robert, sorry to be a pain with this, but I'm still having problems.
A bit more context might help here. I have a toggle that switches various camera configurations on and off, so the number of cameras active at any one time is variable. When I want to generate an image, I take the currently active cameras, reassign a different RenderSurface object to them, run the render cycle, interrogate the cameras to get their images, combine them, and write one big image to file.

I've also created a slightly simpler setup whereby I don't create a new RenderSurface but just reuse the one the cameras were originally created with. That works fine, but only at the screen resolution. In the more complex scenario I'm not getting an image of the rendered world at all. Previously, before you put me onto the pbuffer, my image was completely empty. Now, having added the following lines to my RenderSurface:

    pNewRS->setWindowRectangle(0, 0, 5000, 3750);
    pNewRS->setDrawableType(Producer::RenderSurface::DrawableType_PBuffer);
    pNewRS->setRenderToTextureMode(Producer::RenderSurface::RenderToRGBTexture);

my image is no longer blank, but it's a corrupted copy of some screen buffer somewhere, not the rendered world.

My thought is that perhaps I should create cameras specifically for the pbuffer before calling viewer->realize(), because it's almost as if the new RenderSurface is not being included in the render cycle. If that's the case, it's a bit of a problem, as I would have to duplicate the 12 cameras I currently have purely for the pbuffer's benefit.

Have I missed something? Do you have a small example? I tried the osgpbuffer example, but it failed to run - some problem with the model, I think.

Any help would be cool.

Neil
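P.S. In case it helps, here is a stripped-down sketch of the readback path I'm attempting. It's simplified rather than my actual code: snapshotToPBuffer is a made-up helper name, pCamera and viewer stand in for one of my real cameras and my osgProducer viewer, and I'm assuming the pbuffer surface needs realizing before the frame is drawn.

    #include <Producer/RenderSurface>
    #include <Producer/Camera>
    #include <osgProducer/Viewer>
    #include <osg/Image>

    // Hypothetical helper: render one frame into an off-screen pbuffer
    // and read the pixels back into an osg::Image.
    osg::Image* snapshotToPBuffer(osgProducer::Viewer& viewer,
                                  Producer::Camera* pCamera)
    {
        // Create an off-screen pbuffer surface at the target resolution.
        Producer::RenderSurface* pNewRS = new Producer::RenderSurface;
        pNewRS->setWindowRectangle(0, 0, 5000, 3750);
        pNewRS->setDrawableType(Producer::RenderSurface::DrawableType_PBuffer);
        pNewRS->setRenderToTextureMode(Producer::RenderSurface::RenderToRGBTexture);
        pNewRS->realize();

        // Point the existing camera at the pbuffer instead of the
        // window surface it was originally created with.
        pCamera->setRenderSurface(pNewRS);

        // Run one render cycle so the camera draws into the pbuffer.
        viewer.sync();
        viewer.update();
        viewer.frame();

        // Read the pixels back. In practice this read has to happen
        // while the pbuffer's GL context is current, e.g. from a
        // post-draw callback on the camera.
        osg::Image* image = new osg::Image;
        image->readPixels(0, 0, 5000, 3750, GL_RGB, GL_UNSIGNED_BYTE);
        return image;
    }

In my real setup this would be done per active camera, with the resulting images tiled into the one big output image before writing to file.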
