Hi Stephan,

As far as I know, a RenderBuffer is nothing more than a logical buffer which 
cannot be read in a shader program or anywhere else. If you want to read 
values from a renderbuffer, you first have to copy them into a texture. 
Hence, I am not really clear on how this would be helpful for osgPPU. 
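To illustrate that copy step: a minimal OpenGL sketch (not from osgPPU itself) of resolving a renderbuffer-backed FBO into a texture-backed FBO with glBlitFramebuffer, so that a shader can then sample the result. The names msaaFBO, resolveFBO, width and height are placeholders of mine:

```cpp
// Source FBO has a (possibly multisampled) renderbuffer color attachment,
// target FBO has a texture color attachment.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);

// Copy (and, if multisampled, resolve) the color data into the texture.
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Afterwards the texture attached to resolveFBO can be bound and sampled.
```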

osgPPU is meant to be an engine that processes input data into output data. The 
input data is in most cases a set of camera attachments (color buffer, depth 
buffer, stencil, ...). In order to process them one has to use textures; 
otherwise processing is useless, I think.

In RenderStage.cpp, renderbuffers are also attached to the FBO, even if 
multisampling is not used. Any camera attachment can be accessed through the 
UnitCameraAttachmentBypass unit in osgPPU. This also works pretty well for the 
depth buffer and the color buffer (for which a renderbuffer is also used). 
Hence, I am pretty sure that if you use a multisampled renderbuffer for the 
camera, there will be no difference from the usual usage. However, I have not 
tried this before, and therefore cannot be 100 per cent sure.
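For reference, a sketch of how such a multisampled camera attachment could be set up in OSG, assuming the osg::Camera::attach() overload that takes multisample parameters; OSG then creates the multisampled renderbuffer internally and resolves it into the texture, which osgPPU units could pick up. The sizes and the sample count of 4 are just example values:

```cpp
// Texture that will receive the resolved color data.
osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setTextureSize(1024, 768);
tex->setInternalFormat(GL_RGBA);

// RTT camera rendering into an FBO.
osg::ref_ptr<osg::Camera> camera = new osg::Camera;
camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

// The last two parameters are multisampleSamples and multisampleColorSamples;
// with non-zero values OSG renders into a multisampled renderbuffer and
// blits the result into the texture.
camera->attach(osg::Camera::COLOR_BUFFER, tex.get(), 0, 0, false, 4, 4);
```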

As for the internal FBOs of osgPPU, I do not see any benefit in using 
RenderBuffers, because you cannot read any data back from them, and hence the 
data cannot be processed. Therefore, I am not really sure how they could help. 
However, I would like to test whether this brings some performance benefit, 
although I am very sceptical.

Art

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=7344#7344

_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org