Hi,

I found the cause of the problem myself. When the scene graph is traversed, an 
osgUtil::RenderStage is created for the camera and added to the camera's 
RenderCache, so only one RenderStage per CullVisitor is created. When the 
RenderStage is drawn, it sets a flag that it was already drawn this frame. So 
no matter how often you add a camera to the scene graph, its scene is only 
rendered once. This optimization makes sense in most cases, but in my special 
case it would be great if I could turn it off.
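
Here is a minimal sketch of the effect (all names are placeholders, assuming a 
pre render camera set up for the virtual texturing pass):

#include <osg/Camera>
#include <osg/Group>

osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);

osg::ref_ptr<osg::Group> root = new osg::Group;
root->addChild(rttCamera.get());  // first parent
root->addChild(rttCamera.get());  // second parent of the very same camera node

// Both cull traversals find the same RenderStage in the camera's RenderCache,
// and its "drawn this frame" flag turns the second draw into a no-op, so the
// camera's subgraph is rendered only once per frame.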

I currently have two solutions for this problem:

1. A PostDrawCallback that resets the RenderStage flag by accessing the 
RenderCache of the current camera. This solution is extremely ugly: I had to 
copy the internal osg class osgUtil::RenderingCache because it is not declared 
in any header file. That is a very bad hack: if the code of this class changes, 
or the compiler produces a different memory layout (different optimizations 
turned on, etc.), the dynamic_cast from osg::Object to osgUtil::RenderingCache 
can go terribly wrong.

2. Create a shallow copy of the virtual texturing scene graph: the root node 
and the pre render camera are copied, while the subgraph with the geometry is 
reused. The pre render pass with its callbacks is then executed once per depth 
peeling pass, as expected, and the memory overhead is small (a small sketch 
follows below).
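
A minimal sketch of what the second approach could look like (assuming a 
template pre render camera rttCamera whose children already contain the shared 
geometry; all names are placeholders):

#include <osg/Camera>
#include <osg/CopyOp>

// One shallow copy of the pre render camera per depth peeling pass. The copy
// gets its own RenderCache, so it is culled and drawn separately, while its
// children (the geometry subgraph) are shared, not duplicated.
osg::ref_ptr<osg::Camera> makePassCamera(osg::Camera* templateCamera)
{
    return new osg::Camera(*templateCamera, osg::CopyOp::SHALLOW_COPY);
}

// Usage:
// for (unsigned int pass = 0; pass < numPeelingPasses; ++pass)
//     peelingRoot->addChild(makePassCamera(rttCamera.get()).get());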

I will implement the second solution, because the first one is obviously not 
the way to go if I don't want to run into stability issues.


Thank you!

Cheers,
Marcel
