Hi all,

For one of the projects I'm working on, I need several render passes that operate on the rendered scene as a texture. I was wondering whether I could base the implementations on the osgFX::Effect class. I know the existing effects all work by simply applying different state sets, but would it be an inappropriate use of the Effect class to write Techniques that contain a render-to-texture camera and then operate on that texture?
A few examples of the sort of thing I'm doing:

1) Blurring the image with a Gaussian kernel.
2) Computing the minimum and maximum pixel values in the image in order to "downgrade" HDR imagery, reproducing the expected effect when an unusually bright object appears in the field of view.
3) Adding noise to the image to simulate a poor-quality camera.

I am a little concerned that this may not be a proper use of the Effect class, as this sort of processing probably has to be done at the root of the scene graph, whereas the existing effects can be applied to any subgraph. However, I do like the idea of implementing these processes as nodes in the graph that can be saved out to .osg files, etc. I could probably achieve this by writing my own class rather than extending Effect, though that would take me more time.

I would greatly appreciate hearing people's thoughts on this matter.

Thanks in advance,
Rob.
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

