Hi Sebastian,

> De: Sebastian Messerschmidt
>
> I know this question has been asked before, but I didn't extract
> enough information out of the answers to understand it fully.
> What I want to achieve seems simple. I have separate render passes with
> the same viewpoint to render different parts of the scene to different
> render targets.
> For example, I need to split the opaque parts from the transparent/
> translucent ones.
> Right now I'm using node masks on the cameras and the respective parts of
> the scene. Unfortunately this brings two major problems:
> 1. Everything has to be traversed/culled again by every camera. So if the
> scene graph is large and complex, with node masks only near the leaves,
> the accumulated cull time becomes a problem.
> 2. If a geode has transparent and opaque parts, it is impossible to
> handle it correctly, as drawables are "second-class" nodes, i.e. they
> don't carry node masks.
> 
> So right now I'm splitting up geodes, giving them different render bins
> and reordering them. Naturally this can have a huge impact on the scene,
> as it gets cluttered at the bottom, effectively cutting my framerate in
> certain scenes to half and below.
> The idea I had in the last few days would look like this (in a rather
> abstract way):
> 
> Cull the scene once and let different cameras use different
> renderbins to render to their target.
> 
> 
> 
> Now my problem is that I don't know where to start with this, or whether
> there are some things in OSG that might even allow this naturally.

I am currently converting FlightGear to deferred rendering. See
http://wiki.flightgear.org/Project_Rembrandt for pointers and a log
of the conversion (and videos).

To address this particular problem, I have a first camera that
traverses the scene graph to initialize the MRT buffers, and another
camera that renders the lighting. Both have the same view and
projection matrices and share the same depth buffer. Ambient light,
sun lighting, fog and bloom are rendered with additive blending
by drawing screen-aligned quads, while point and spot lights are
rendered like normal geometry (their light volumes are drawn).

So I can remove the transparent bins from the render stage of the first
camera and inject them into the render stage of the second. I do that
with the code below.

As a global variable:
   osgUtil::RenderBin::RenderBinList savedTransparentBins;

In the cull callback of the geometry camera (the one that traverses the
scene graph):

    virtual void operator()( osg::Node *n, osg::NodeVisitor *nv )
    {
        osgUtil::CullVisitor* cv = static_cast<osgUtil::CullVisitor*>( nv );
        osg::Camera* camera = static_cast<osg::Camera*>( n );

        savedTransparentBins.clear();

        cv->traverse( *camera );

        // Save transparent bins to render later
        osgUtil::RenderStage* renderStage = cv->getRenderStage();
        osgUtil::RenderBin::RenderBinList& rbl = renderStage->getRenderBinList();
        for (osgUtil::RenderBin::RenderBinList::iterator rbi = rbl.begin(); rbi != rbl.end(); ) {
            if (rbi->second->getSortMode() == osgUtil::RenderBin::SORT_BACK_TO_FRONT) {
                savedTransparentBins.insert( std::make_pair( rbi->first, rbi->second ) );
                rbl.erase( rbi++ );
            } else {
                ++rbi;
            }
        }
    }

In the cull callback of the camera that should display the transparent
objects:

    virtual void operator()( osg::Node *n, osg::NodeVisitor *nv )
    {
        simgear::EffectCullVisitor* cv = dynamic_cast<simgear::EffectCullVisitor*>( nv );
        osg::Camera* camera = static_cast<osg::Camera*>( n );
        cv->traverse( *camera );

        // ....

        // Render the saved transparent render bins after this camera's
        // own bins by offsetting their bin numbers
        osgUtil::RenderStage* renderStage = cv->getRenderStage();
        osgUtil::RenderBin::RenderBinList& rbl = renderStage->getRenderBinList();
        for (osgUtil::RenderBin::RenderBinList::iterator rbi = savedTransparentBins.begin();
             rbi != savedTransparentBins.end(); ++rbi ) {
            rbl.insert( std::make_pair( rbi->first + 10000, rbi->second ) );
        }
        savedTransparentBins.clear();
    }

For this to work correctly, we need the contribution of artists to flag the
really transparent objects and put them in transparent bins. All other (non-
flagged) objects are put unconditionally in the normal render bin (with
alpha test enabled), because many parts of a model are opaque but use a
texture atlas with an alpha channel, and the transparency detection in the
loader plugin is not reliable enough, resulting in a lot of false
positives (opaque objects treated as transparent and incorrectly lit).


HTH
Regards,
-Fred
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
