Hi,

I was wondering if you could give me some guidance regarding RTT. For background: I 
have read the osgMultipleRenderTargets example and (hopefully 8)) understood it well. 
However, I was wondering whether it would be possible to simplify it a little for 
my app.

My application has multiple slave cameras using osgViewer (not CompositeViewer). 
Each camera can show the scene using a different sensor mode (e.g. thermal, day, 
night-vision, etc.).
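
To make the setup concrete, each slave lives in its own window, roughly like this 
(simplified sketch; the window sizes/positions and the per-sensor state sets are 
placeholders, not my actual code):

#include <osg/Camera>
#include <osg/GraphicsContext>
#include <osgViewer/Viewer>

// Simplified sketch of one slave camera in its own window. The sensor-mode
// specific state (shaders, colour tables, etc.) is omitted here.
osg::Camera* createSlaveCamera(int x, int y, int width, int height)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->x = x;
    traits->y = y;
    traits->width = width;
    traits->height = height;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext(gc.get());
    camera->setViewport(new osg::Viewport(0, 0, width, height));
    // Per-sensor state set would be attached to the camera's state set here.
    return camera.release();
}

// In main(), one slave per sensor window, all sharing the master's scene data:
// viewer.addSlave(createSlaveCamera(0, 0, 640, 480), osg::Matrix(), osg::Matrix());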

On a per-camera basis, I need to render the scene to a texture. Each of those 
cameras then renders its respective image onto a simple quad in its own 
window.

The question is: can I do this without adding a separate RTT camera for each 
slave camera?
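
For reference, the "separate RTT camera" route I would like to avoid would look 
roughly like this for each slave (simplified sketch only, the names are placeholders):

#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>

// A PRE_RENDER camera that renders the scene into the given texture via an FBO.
osg::Camera* createRTTCamera(osg::Node* scene, osg::Texture2D* texture,
                             int width, int height)
{
    osg::ref_ptr<osg::Camera> rtt = new osg::Camera;
    rtt->setRenderOrder(osg::Camera::PRE_RENDER);
    rtt->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    rtt->setViewport(0, 0, width, height);
    rtt->attach(osg::Camera::COLOR_BUFFER, texture);
    rtt->addChild(scene);
    return rtt.release();
}

// A unit quad textured with the RTT result, to be shown by the slave camera.
osg::Geode* createScreenQuad(osg::Texture2D* texture)
{
    osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
        osg::Vec3(0.0f, 0.0f, 0.0f),   // corner
        osg::Vec3(1.0f, 0.0f, 0.0f),   // width vector
        osg::Vec3(0.0f, 1.0f, 0.0f));  // height vector
    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable(quad.get());
    geode->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture);
    return geode.release();
}

// Per slave: create an osg::Texture2D with setTextureSize(width, height) and
// setInternalFormat(GL_RGBA), add createRTTCamera(...) to the graph, and have
// the slave camera view createScreenQuad(...) under an ortho projection.

That means one extra camera (plus a quad branch) per sensor window, which is what 
I am hoping to avoid.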

Will enabling RTT for each slave camera and drawing a simple quad in the 
PostDrawCallback/FinalDrawCallback (using direct OpenGL calls) work?
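
Concretely, what I had in mind per slave camera is something like this (a rough 
sketch only, I am not at all sure this is even valid):

#include <osg/Camera>
#include <osg/RenderInfo>
#include <osg/Texture2D>
#include <osg/GL>

// Final draw callback that binds the texture the slave camera rendered into
// and draws a full-viewport quad with plain, fixed-function OpenGL.
struct DrawTexturedQuadCallback : public osg::Camera::DrawCallback
{
    DrawTexturedQuadCallback(osg::Texture2D* texture) : _texture(texture) {}

    virtual void operator()(osg::RenderInfo& renderInfo) const
    {
        unsigned int contextID = renderInfo.getState()->getContextID();
        osg::Texture::TextureObject* to = _texture->getTextureObject(contextID);
        if (!to) return;   // texture not yet created on this context

        // Identity matrices so the quad spans the whole viewport
        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

        glEnable(GL_TEXTURE_2D);
        to->bind();        // bind the texture the camera rendered into

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
        glEnd();

        glMatrixMode(GL_PROJECTION); glPopMatrix();
        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    }

    osg::ref_ptr<osg::Texture2D> _texture;
};

// Hooked up per slave camera:
// slaveCamera->attach(osg::Camera::COLOR_BUFFER, texture.get());
// slaveCamera->setFinalDrawCallback(new DrawTexturedQuadCallback(texture.get()));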

Do I absolutely need to add another camera to the scene to make RTT work?

Maybe if I understood the render-to-texture process in the camera a little 
better, especially when it starts and when it ends, I would have a better 
idea of whether what I'm trying to do is possible.

Any help would be appreciated.

Cheers,
Guy
