Hello,

I'm trying to learn the intricacies of rendering to textures using OSG, and 
I've dissected the osgfpdepth example to do it.  I've been largely successful 
at reducing the code to a much simpler example that renders using the color 
buffer only.

Anyway, my goal was to have both the main camera and the RTT camera render 
the same geometry.  The RTT camera, of course, would have different state in 
its path that would create a different effect on the geometry.
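
To make it concrete, here's the kind of setup I'm aiming for, using a nested 
pre-render camera rather than a slave.  The variable names, texture size, and 
the createSceneGeometry() call are just mine for illustration, not anything 
from osgfpdepth:


Code:

    #include <osg/Camera>
    #include <osg/Group>
    #include <osg/Texture2D>

    // 'scene' is the shared geometry subgraph; both cameras should draw it.
    osg::ref_ptr<osg::Node> scene = createSceneGeometry(); // placeholder

    // Color texture the RTT camera renders into.
    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
    tex->setTextureSize( 1024, 1024 );
    tex->setInternalFormat( GL_RGBA );

    // RTT camera: FBO pre-render pass into the texture.
    osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
    rttCamera->setRenderOrder( osg::Camera::PRE_RENDER );
    rttCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
    rttCamera->setViewport( 0, 0, 1024, 1024 );
    rttCamera->attach( osg::Camera::COLOR_BUFFER, tex.get() );
    rttCamera->addChild( scene.get() );

    // Different state for the RTT path only (e.g. a shader or wireframe)
    // would go on rttCamera's StateSet here.

    // The root holds the scene once for the main camera, plus the RTT camera.
    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( scene.get() );
    root->addChild( rttCamera.get() );
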

However, it seems the osgfpdepth example isn't set up to do this.  That is, 
depth buffering is enabled on the root node with the following code:


Code:

    // Reversed depth range (GEQUAL test, near = 1.0, far = 0.0), applied
    // with OVERRIDE so state further down the graph can't switch it off.
    osg::Depth* depth = new osg::Depth( osg::Depth::GEQUAL, 1.0, 0.0 );
    sceneSS->setAttributeAndModes( depth, (osg::StateAttribute::ON
                                           | osg::StateAttribute::OVERRIDE) );

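As far as I can tell, that reversed range only works if the camera's clear 
depth is flipped to match.  Something like this on the camera is my reading 
of the technique, not a line copied from the example:


Code:

    // The depth buffer must be cleared to 0.0 (the reversed "far" value)
    // rather than the default 1.0, or nothing passes the GEQUAL test.
    camera->setClearDepth( 0.0 );
    camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
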
I'm not sure why depth buffering has to be enabled for the slave camera to 
render to texture, but I do know that enabling it means the main camera no 
longer renders the geometry.  That is, either the main camera or the RTT 
camera renders the geometry, but not both; hence the need for a second slave 
camera whose only job is to draw the RTT output to the screen, as sketched 
below.
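
In case it helps show what I mean by that second camera, here's roughly what 
I'd expect it to look like: a post-render ortho camera drawing a fullscreen 
quad textured with the RTT output ('tex' and 'root' are from my sketch 
above):


Code:

    #include <osg/Geode>
    #include <osg/Geometry>

    // Fullscreen quad textured with the RTT result.
    osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
        osg::Vec3( 0.0f, 0.0f, 0.0f ),    // corner
        osg::Vec3( 1.0f, 0.0f, 0.0f ),    // width vector
        osg::Vec3( 0.0f, 1.0f, 0.0f ) );  // height vector
    quad->getOrCreateStateSet()->setTextureAttributeAndModes( 0, tex.get() );

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable( quad.get() );

    // Post-render HUD camera that just draws the quad over the frame.
    osg::ref_ptr<osg::Camera> hudCamera = new osg::Camera;
    hudCamera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
    hudCamera->setRenderOrder( osg::Camera::POST_RENDER );
    hudCamera->setProjectionMatrix( osg::Matrix::ortho2D( 0.0, 1.0, 0.0, 1.0 ) );
    hudCamera->setViewMatrix( osg::Matrix::identity() );
    hudCamera->setClearMask( 0 ); // don't erase the main camera's output
    hudCamera->getOrCreateStateSet()->setMode( GL_LIGHTING,
                                               osg::StateAttribute::OFF );
    hudCamera->addChild( geode.get() );
    root->addChild( hudCamera.get() );
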

Given that I'm new to render-to-texture methods and techniques, I've no doubt 
there's some obvious reasoning that I've missed.  Nevertheless, I'd appreciate 
any comments that might illuminate the issue.

Thanks,

Joel
