This is how I've been doing post-rendering effects, too.

However, I have never done any performance benchmarks. My instinct tells me
that this method should have faster cull time than using a Camera, but if
post-rendering cull time makes up only a small percentage of the total cull
time, then I imagine the performance benefit would be difficult to measure.

Have you done any performance comparisons against equivalent use of Camera
nodes?


On Thu, Feb 14, 2013 at 8:33 AM, Wang Rui <wangra...@gmail.com> wrote:

> Hi Robert, hi all,
>
> I want to raise this topic while reviewing my effect
> compositing/view-dependent shadow code. As far as I know, most of us use
> osg::Camera for rendering to texture, and thus for post-processing and
> deferred shading. We attach the camera's output to a texture with
> attach(), set the render target implementation to FBO, and so on. It
> works fine so far in my client work.
>
> But these days I found another way to render a scene to an FBO-based
> texture/image:
>
> First I create a node (including input textures, shaders, and the
> sub-scene or a screen-sized quad) and apply an FBO and a Viewport as its
> state attributes:
>
>     osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
>     tex->setTextureSize( 1024, 1024 );
>
>     osg::ref_ptr<osg::FrameBufferObject> fbo = new osg::FrameBufferObject;
>     fbo->setAttachment( osg::Camera::COLOR_BUFFER,
>                         osg::FrameBufferAttachment(tex.get()) );
>
>     node->getOrCreateStateSet()->setAttributeAndModes( fbo.get() );
>     node->getOrCreateStateSet()->setAttributeAndModes(
>         new osg::Viewport(0, 0, 1024, 1024) );
>
> Then, if we need more deferred passes, we can add more nodes with
> screen-sized quads and set texOutput as a texture attribute. The
> intermediate passes require fixed view and projection matrices, so we can
> give each such node a cull callback like:
>
>     cv->pushModelViewMatrix( new RefMatrix(Matrix()),
>                              Transform::ABSOLUTE_RF );
>     cv->pushProjectionMatrix(
>         new RefMatrix(Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );
>
>     each_child->accept( nv );
>
>     cv->popProjectionMatrix();
>     cv->popModelViewMatrix();
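>
> A fuller version of that callback might look like the following sketch
> (FixedMatrixCallback is just an illustrative name, not existing code):
>
>     class FixedMatrixCallback : public osg::NodeCallback
>     {
>     public:
>         virtual void operator()( osg::Node* node, osg::NodeVisitor* nv )
>         {
>             osgUtil::CullVisitor* cv =
>                 dynamic_cast<osgUtil::CullVisitor*>( nv );
>             if ( !cv ) { traverse( node, nv ); return; }
>
>             // Force an identity model-view and a unit ortho projection
>             // so the screen-sized quad exactly fills the viewport.
>             cv->pushModelViewMatrix( new osg::RefMatrix(osg::Matrix()),
>                                      osg::Transform::ABSOLUTE_RF );
>             cv->pushProjectionMatrix( new osg::RefMatrix(
>                 osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );
>
>             traverse( node, nv );
>
>             cv->popProjectionMatrix();
>             cv->popModelViewMatrix();
>         }
>     };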
>
> This works well in my initial tests and doesn't require a list of
> osg::Camera nodes. I think this would be a lightweight approach to
> post-processing, as it won't create multiple RenderStages at the back-end
> and it reduces the chance of having deeply nested cameras in a scene
> graph.
>
> Do you think it would be useful to have such a class? The user inputs a
> sub-scene or any texture; the class uses multiple passes to process it and
> outputs a result texture. The class won't need internal cameras for the
> RTT work, and it can be placed anywhere in the scene graph as a deferred
> pipeline implementation, or as a pure GPU-based image filter.
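>
> To make the idea concrete, the interface might look roughly like this
> (all names here are tentative; this is only a sketch of what I have in
> mind, not written code):
>
>     class DeferredPassGroup : public osg::Group
>     {
>     public:
>         // Input: either a sub-scene to render, or an existing texture.
>         void setInputScene( osg::Node* scene );
>         void setInputTexture( osg::Texture* texture );
>
>         // Add an intermediate pass: a screen-sized quad with the given
>         // shaders, drawn into an FBO-attached texture, which is returned
>         // so it can feed the next pass.
>         osg::Texture2D* addPass( osg::Shader* vertShader,
>                                  osg::Shader* fragShader,
>                                  int width, int height );
>
>         // The texture holding the final result.
>         osg::Texture2D* getOutputTexture() const;
>     };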
>
> I'd like to rewrite my effect compositor implementation with the new idea
> if it is considered worthwhile; otherwise I will set it aside and get
> ready to submit both the deferred shading pipeline and the new VDSM code
> in the coming week. :-)
>
> Cheers,
>
> Wang Rui
>
>
>
> _______________________________________________
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
>


-- 
Paul Martz
Skew Matrix Software LLC