Hi Rui,

The cost of traversing an osg::Camera in cull should be very small, and one can avoid creating a separate RenderStage by using the NESTED_RENDER render order. Using a different custom Node to do the same RTT setup that is done for osg::Camera will incur similar CPU and GPU costs, so unless there is a really sound reason for providing an alternative I'd rather just stick with osg::Camera: an alternative would just be more code to maintain and more code to teach people how to use, with the additional hurdle of having two ways to do the same thing. If need be, perhaps osg::Camera could be extended if it isn't able to handle all the currently required use cases.
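For illustration, a minimal sketch of such a nested camera, assuming the stock osg::Camera API; a camera with NESTED_RENDER and no buffer attachments is traversed in-line by the CullVisitor rather than getting its own RenderStage (here "subgraph" is just a placeholder for whatever children should be rendered with the overridden matrices):

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;

    // Override the view/projection matrices for the subgraph only.
    camera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
    camera->setViewMatrix( osg::Matrix::identity() );
    camera->setProjectionMatrixAsOrtho2D( 0.0, 1.0, 0.0, 1.0 );

    // NESTED_RENDER keeps the subgraph in the parent's RenderStage.
    camera->setRenderOrder( osg::Camera::NESTED_RENDER );
    camera->addChild( subgraph );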
As for scalability, there is no limit on how many osg::Camera nodes, nested or not, you can have in the scene graph; the only limits are the amount of memory available and your imagination, and both limits apply equally to any scheme you come up with for doing something similar to osg::Camera. While osg::Camera can do lots of tasks, in itself it isn't a large object; it's only the buffers/textures that you attach that are significant in size, and this applies to both approaches.

Right now I'm rather unconvinced there is a pressing need for an alternative.

Robert.

On 14 February 2013 15:33, Wang Rui <[email protected]> wrote:
> Hi Robert, hi all,
>
> I want to raise this topic while reviewing my effect compositing /
> view-dependent shadow code. As far as I know, most of us use osg::Camera
> for rendering to texture and thus for post-processing/deferred shading
> work. We attach the camera's sub-scene to a texture with attach() and set
> the render target to FBO, and so on. It has worked fine so far in my
> client work.
>
> But these days I found another way to render a scene to an FBO-based
> texture/image:
>
> First I create a node (including input textures, shaders, and the
> sub-scene or a screen-sized quad) and apply an FBO and a Viewport as its
> state attributes:
>
> osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
> tex->setTextureSize( 1024, 1024 );
>
> osg::ref_ptr<osg::FrameBufferObject> fbo = new osg::FrameBufferObject;
> fbo->setAttachment( osg::Camera::COLOR_BUFFER,
>     osg::FrameBufferAttachment(tex.get()) );
>
> node->getOrCreateStateSet()->setAttributeAndModes( fbo.get() );
> node->getOrCreateStateSet()->setAttributeAndModes(
>     new osg::Viewport(0, 0, 1024, 1024) );
>
> Then if we need more deferred passes, we can add more nodes with
> screen-sized quads and set texOutput as a texture attribute. The
> intermediate passes require fixed view and projection matrices, so we can
> add a cull callback like:
>
> cv->pushModelViewMatrix( new osg::RefMatrix(osg::Matrix()),
>     osg::Transform::ABSOLUTE_RF );
> cv->pushProjectionMatrix( new osg::RefMatrix(
>     osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );
>
> each_child->accept( nv );
>
> cv->popProjectionMatrix();
> cv->popModelViewMatrix();
>
> This works well in my initial tests and it doesn't require a list of
> osg::Camera objects. I think this would be a lightweight way to do
> post-processing work, as it won't create multiple RenderStages at the
> back end and will reduce the chance of having too many nested cameras in
> a scene graph.
>
> Do you think it would be useful to have such a class? The user inputs a
> sub-scene or any texture; the class uses multiple passes to process it
> and outputs to a result texture. The class won't need internal cameras
> for the RTT work, and can be placed anywhere in the scene graph as a
> deferred pipeline implementer, or a pure GPU-based image filter.
>
> I'd like to rewrite my effect compositor implementation with the new
> idea if it is considered necessary; otherwise I will drop it and soon be
> ready to submit both the deferred shading pipeline and the new VDSM code
> in the following week. :-)
>
> Cheers,
>
> Wang Rui
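A self-contained sketch of the pass-node idea described above could look like the following; it assumes only stock OSG classes, and the names FixedMatrixCallback and createPassNode are illustrative placeholders rather than an existing API:

    #include <osg/Camera>
    #include <osg/FrameBufferObject>
    #include <osg/Geode>
    #include <osg/Geometry>
    #include <osg/Group>
    #include <osg/NodeCallback>
    #include <osg/Texture2D>
    #include <osg/Viewport>
    #include <osgUtil/CullVisitor>

    // Pushes fixed view/projection matrices around the cull traversal of
    // the pass, matching the callback described in the mail above.
    class FixedMatrixCallback : public osg::NodeCallback
    {
    public:
        virtual void operator()( osg::Node* node, osg::NodeVisitor* nv )
        {
            osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>( nv );
            if ( cv )
            {
                cv->pushModelViewMatrix( new osg::RefMatrix(osg::Matrix()),
                                         osg::Transform::ABSOLUTE_RF );
                cv->pushProjectionMatrix( new osg::RefMatrix(
                    osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );
                traverse( node, nv );
                cv->popProjectionMatrix();
                cv->popModelViewMatrix();
            }
            else
                traverse( node, nv );
        }
    };

    // Builds one processing pass: a screen-sized quad rendered into
    // 'output', with the FBO and viewport applied as ordinary state
    // attributes rather than via an osg::Camera.
    osg::Node* createPassNode( osg::Texture2D* output, int w, int h )
    {
        osg::ref_ptr<osg::Geode> quad = new osg::Geode;
        quad->addDrawable( osg::createTexturedQuadGeometry(
            osg::Vec3(), osg::Vec3(1.0f, 0.0f, 0.0f), osg::Vec3(0.0f, 1.0f, 0.0f)) );

        osg::ref_ptr<osg::Group> pass = new osg::Group;
        pass->addChild( quad.get() );

        osg::ref_ptr<osg::FrameBufferObject> fbo = new osg::FrameBufferObject;
        fbo->setAttachment( osg::Camera::COLOR_BUFFER,
                            osg::FrameBufferAttachment(output) );

        osg::StateSet* ss = pass->getOrCreateStateSet();
        ss->setAttributeAndModes( fbo.get() );
        ss->setAttributeAndModes( new osg::Viewport(0, 0, w, h) );

        pass->setCullCallback( new FixedMatrixCallback );
        return pass.release();
    }

Chaining passes would then be a matter of creating one such node per pass and binding the previous pass's output texture as an input texture on the next pass's StateSet.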

