Hi,
I'm implementing a more advanced SSAO algorithm using osgPPU. I need to use
the results more carefully than just multiplying them on top of the final image:
the resulting texture is used as an input to a regular surface fragment shader
during the main rendering pass. The processing therefore needs to happen exactly
after the depth pre-render pass but before the main rendering pass.
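To be concrete about how the result is meant to be used, the idea is roughly the
following sketch; ssaoResultTexture, sceneRoot and texture unit 2 are placeholder
names for illustration, not my actual code:

#include <osg/StateSet>
#include <osg/Uniform>

// Placeholder names: ssaoResultTexture is whatever texture the last osgPPU
// unit writes into, sceneRoot is the main scene's root node, unit 2 is
// arbitrary. The surface shader then samples "ssaoMap" during the main pass.
osg::StateSet* ss = sceneRoot->getOrCreateStateSet();
ss->setTextureAttributeAndModes(2, ssaoResultTexture.get(), osg::StateAttribute::ON);
ss->addUniform(new osg::Uniform("ssaoMap", 2));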
At the moment the algorithm works nicely, except that it runs after the main
render. This means the results are only available on the next frame, and the
lighting lags one frame behind the rest of the scene, which of course causes
unacceptable artefacts as soon as anything moves.
There are plenty of other scenarios where this would also be necessary,
e.g. processing reflections, or processing shadow maps before they're used by
algorithms such as VSM or CSM. I've looked through all the examples and forum
posts, but everyone seems to be happy with processing as the very last stage.
Perhaps there's some really obvious simple way to do it, but I just can't make
it happen.
So basically, if I have a scene graph roughly as follows:
osgViewer (renders to an FBO)
|
+--- depthCamera (with PRE_RENDER, to an FBO) ------- (stuff)
|
+--- (same stuff)
... where do I put the Processor, and what else do I need to do, to make its
units render AFTER the depthCamera but BEFORE the osgViewer's camera? The main
camera needs to render to an FBO too, because its results are further processed
with yet another Processor (could multiple Processors be a problem, by the way?).
My current attempts are as follows (disregard this if there's some nicer way to
do it instead). I've tried simply adding the SSAO processor as depthCamera's
child (sketched below). I'm not very familiar with all the render bin/stage
machinery, but the idea is that the processor would be included in the pre-render
pass and thus rendered before the main pass begins. This actually almost works,
though possibly by accident: looking at GLIntercept, the processor does indeed
run before the main camera and does what it's supposed to.
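Schematically that attempt looks like this; I'm assuming here that an
osgPPU::Processor, being a Group, can simply be parented under the pre-render
camera (the SSAO unit chain itself is omitted):

#include <osgPPU/Processor.h>

// First attempt: hang the processor under the depth pre-render camera so
// that (hopefully) its units are drawn inside that pass.
osg::ref_ptr<osgPPU::Processor> ssaoProcessor = new osgPPU::Processor;
ssaoProcessor->setCamera(depthCamera.get());
// ... the SSAO unit chain is attached under the processor here ...
depthCamera->addChild(ssaoProcessor.get());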
However, this causes another problem which I don't even begin to understand
(which may or may not be related to osgPPU). Everything would be perfect, except
that for some reason there is now an extra call to
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) just before the main osgViewer camera
renders (apparently after the correct FBO has already been bound earlier). This
makes it draw directly to the screen, which is wrong. The camera does have its FBO
attachments and textures, and they're even cleared correctly before rendering,
but for some unexplained reason this rebinding happens and ruins everything.
I've spent hours stepping through OSG's sources, and while I can see where
this call is made, I have no idea why. It's in FrameBufferObject.cpp, in
apply(State&, BindTarget), this part:
if (_attachments.empty())
{
    ext->glBindFramebufferEXT(target, 0);
    return;
}
(It's reached via osg::State::applyGlobalDefaultAttribute(); the buffer
attachment list for osgViewer's camera isn't empty, so is OSG applying some
default state attribute here when it isn't wanted?)
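For what it's worth, this is the kind of check I mean when I say the main
camera's attachment list isn't empty; just a diagnostic sketch, using the viewer
from the earlier snippet:

#include <osg/Notify>

// Diagnostic only: print how many buffer attachments the main camera has,
// to rule out the "empty attachments" case for the camera itself.
osg::notify(osg::NOTICE) << "main camera attachments: "
                         << viewer.getCamera()->getBufferAttachmentMap().size()
                         << std::endl;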
Another approach, adding a group with render bin -1 at the top level and
parenting the processor under that (sketched below), gives pretty much the same
results and the same problem.
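That attempt, roughly; again schematic, with preBinGroup as an illustrative name
and ssaoProcessor/root taken from the earlier sketches:

// Second attempt: force the processor into a negative render bin at the
// top level so it is drawn before the main pass.
osg::ref_ptr<osg::Group> preBinGroup = new osg::Group;
preBinGroup->getOrCreateStateSet()->setRenderBinDetails(-1, "RenderBin");
preBinGroup->addChild(ssaoProcessor.get());
root->addChild(preBinGroup.get());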
So... any ideas?
Thanks,
Miika