Now that I give this some more thought, the concept of post-processing is
inherently non-spatial, so it really doesn't belong in a scene graph at
all. Repeatedly "culling" entities that we know will always be rendered is
redundant at best. Wouldn't it be better to have a list of dedicated RTT
objects as described by Rui, and process them as a Camera post-draw
callback?

Just thinking out loud...


On Thu, Feb 14, 2013 at 5:50 PM, Wang Rui <[email protected]> wrote:

> Hi all,
>
> Thanks for the replies. It is always midnight for me when most community
> members are active, so I have to reply to you all later, in my morning. :-)
>
> Paul, I haven't done any comparisons yet. There won't be too many
> post-processing steps in a typical application, and as Robert says, the
> cull-time cost of a camera and a normal node won't differ much, so I think
> any difference may be hard to measure.
>
> Aurelien's past idea (using RenderBins directly) also interests me, but it
> would change the back-end dramatically. I'm also focusing on implementing a
> complete deferred pipeline including HDR, SSAO, color grading and AA, and
> finally merging it with normal scenes such as the HUD GUI.
> The automatic switch between the DS and normal pipelines is done by changing
> the whole 'technique' instead of moving child nodes, which may be found in
> the osgRecipes project I'm maintaining.
>
> But I don't think it would be easy to implement an executeCameraAsync()
> method at present. Because OSG is a lazy rendering system, one can hardly
> insert CPU-side computation into FBO cameras. Maybe it could be done by
> using the pre- and post-draw callbacks of a specified camera.
>
> I also agree with Daniel's second point that the pipeline should be
> compatible with multiple views. With the pipeline as a node in the scene
> graph, we can easily do this by sharing the same root node across views. As
> for his first point, because we also have nodes that should not be affected
> by the post-processing effects (like the GUI and HUD display), and
> developers may require multiple post effects in the same scene graph (e.g.,
> drawing dynamic and static objects differently), I don't find it convincing
> to separate the post-processing framework entirely and place it in draw
> callbacks or the viewer's graphics operations.
>
> So, in conclusion, I agree with Robert that OSG itself doesn't need an
> additional RTT node at present, and I will use cameras to perform all
> passes. This approach has already proven compatible in my client work with
> most current OSG functionality, including the VDSM shadows, and with some
> external libraries like SilverLining and osgEarth.
>
> I will try to tidy up and submit my current code next week, along with a
> demo scene. Then I will modify the osgRecipes project to use the new idea
> that came to mind, to find its pros and cons.
>
> Thanks,
>
> Wang Rui
>
> _______________________________________________
> osg-users mailing list
> [email protected]
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>
>


-- 
Paul Martz
Skew Matrix Software LLC
