Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2018-04-24 Thread Robert Osfield
Hi Rômulo,

On 24 April 2018 at 05:21, Rômulo Cerqueira  wrote:
> is there any example to implement FBO without cameras? Where can I find it?

Cameras are just the public interface that users have for setting up all
the things required when doing render to texture.  There aren't any
"heavy" consequences to using osg::Camera for RTT.
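
For reference, a minimal Camera-based RTT setup looks something like
the following (just a sketch - the size and the 'subgraph' variable are
illustrative):

osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setTextureSize( 1024, 1024 );
tex->setInternalFormat( GL_RGBA );

osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
rttCamera->setRenderOrder( osg::Camera::PRE_RENDER );
rttCamera->setViewport( 0, 0, 1024, 1024 );
rttCamera->attach( osg::Camera::COLOR_BUFFER, tex.get() );
rttCamera->addChild( subgraph.get() );  // the scene to render to texture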

Could you explain the problem you perceive that you need to solve by
not using osg::Camera for RTT?

Robert.


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2018-04-23 Thread Rômulo Cerqueira
Folks,

is there any example to implement FBO without cameras? Where can I find it?

... 

Thank you!

Cheers,
Rômulo

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=73488#73488


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-15 Thread Aurelien Albert
Hi,


 the concept of post-processing is inherently non-spatial, so it really 
 doesn't belong in a scene graph at all


=> this is why I think we should be able to execute a render pass on an
arbitrary camera: the subgraph of this camera may not have a spatial
organization, but a process-logic organization


 Wouldn't it be better to have a list of dedicated RTT objects as described by 
 Rui, and process them as a Camera post-draw callback


=> this is also my idea: have a dedicated executeCamera method which takes a
camera and a state as arguments; with that we can call it from a final /
post draw callback.

Rather than a list of dedicated RTT objects, which will never cover all the
different use cases, I think it is better to re-use the camera class with its
subgraph, which already works well: by building different small sub-graphs we
can implement different processing (with a render quad as input, or with a
point cloud, or with anything else).



Aurelien

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=52688#52688


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-15 Thread Wang Rui
Hi,

In my present implementation in the osgRecipes project, I create a list of
pre-render cameras internally as post processing passes, instead
of explicitly in the scene graph. So on the user level they may simply
write:

EffectCompositor* effect = new EffectCompositor;
effect->loadFromEffectFile( "ssao.xml" );
effect->addChild( subgraph );

And during the cull traversal, cameras (and screen quads) are added
directly to the cull visitor:

cv->push*();
camera->accept( *cv );
...
cv->pop*();
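
In outline, it is as if the compositor did something like the following
during its traverse() - a hypothetical sketch, not the actual osgRecipes
code (needs <osg/Group> and <osgUtil/CullVisitor>):

class PassInjector : public osg::Group
{
public:
    virtual void traverse( osg::NodeVisitor& nv )
    {
        osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>( &nv );
        if ( cv )
        {
            // each pre-render pass camera builds its own RenderStage via
            // the cull visitor, without being a child in the scene graph
            for ( unsigned int i = 0; i < _passCameras.size(); ++i )
                _passCameras[i]->accept( *cv );
        }
        osg::Group::traverse( nv );
    }
    std::vector< osg::ref_ptr<osg::Camera> > _passCameras;
};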

This simplifies the class interface but doesn't change the render
stages/render bins we currently use. But at least the post-processing passes
are not traversable by a scene node visitor, thus not part of the scene
graph, and will not affect intersection tests and other updating work, as
Paul and Aurelien point out.

But we may not be able to directly migrate these to a draw callback,
because the CullVisitor doesn't really render anything; it only builds the
render graph for SceneView. A possible idea in my mind is to execute the
FrameBufferObject::apply() method along with the quad's drawImplementation()
manually in the draw callback, which means having another complete
deferred draw process besides the main renderstage/renderbin back-end. I
don't know if it would be a good choice, because the implementation may
finally look like an OpenGL-style one, not an OSG composition.
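
A rough sketch of that manual approach, assuming the FBO and quad are set
up elsewhere (untested):

struct ManualPassCallback : public osg::Camera::DrawCallback
{
    virtual void operator()( osg::RenderInfo& ri ) const
    {
        osg::State& state = *ri.getState();
        _fbo->apply( state );                 // bind the FBO directly
        state.apply( _quad->getStateSet() );  // apply the pass's shaders/input textures
        _quad->drawImplementation( ri );      // draw the screen-sized quad
    }
    osg::ref_ptr<osg::FrameBufferObject> _fbo;
    osg::ref_ptr<osg::Geometry> _quad;
};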

Thanks,

Wang Rui


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-15 Thread Robert Osfield
Hi Paul,

On 15 February 2013 04:09, Paul Martz skewmat...@gmail.com wrote:
 Now that I give this some more thought, the concept of post-processing is
 inherently non-spatial, so it really doesn't belong in a scene graph at all.
 Repeatedly culling entities that we know will always be rendered is
 redundant at best. Wouldn't it be better to have a list of dedicated RTT
 objects as described by Rui, and process them as a Camera post-draw
 callback?

Post-processing techniques introduce some interesting conceptual and
implementation issues, and as you point out, conceptually a post
process isn't directly part of the scene; rather, it's more closely
associated with how you view the scene.  This conceptual aspect puts
it either up in the viewer, outside of the scene graph, or between the
viewer and scene as an intermediate layer.

Implementation-wise, the intermediate layer could be done in the viewer
itself, but the viewer tends to have a static list of cameras, and the
post-processing effects need to be done per viewer camera, so placing
the cameras directly in the viewer might pose problems in itself if
the cameras at the viewer or in the intermediate level are at all dynamic.
This issue would suggest nesting the intermediate layer below a viewer
camera.

Performance-wise, viewer cameras can be threaded - both when handling
multiple contexts and when handling multiple cameras on a single
context.  The latter might be of interest here: ideally you'd want to
interleave the cull and draw dispatch of multiple cameras on a single
context such that cull0 runs and completes, then cull1 and draw0 run
in parallel, then cull2 and draw1 run in parallel, etc.  For best
performance we'd want to take advantage of this in the intermediate
level as well.  So here we have a motivation for putting the post-
processing cameras into the viewer.

Or... have a scheme where the viewer-level Cameras' Renderers collect
the nested Cameras from the intermediate level and then do the
threading on these.  This might achieve the best of both worlds.  A
twist on this would be to have the CullVisitor spot places where it
can thread cull and draw dispatch as it hits Cameras in the scene
graph.  The latter approach would have the advantage of working with
existing scene graphs and NodeKits like osgShadow.

On the topic of re-using RenderBins when doing multi-pass rendering,
this is partially possible right now, but you really have to know how
the rendering back end works and the constraints you have to work
within to prevent everything getting out of sync.  In most cases it's
simply not possible to reuse RenderBins: even if the same objects make
it into the RenderBin, their state will mostly be different, and the
only way to know what the state is is to collect it in the cull
traversal.  So you still have to do the cull traversal and build a
unique StateGraph, and with it unique RenderLeafs, which in turn
require a unique RenderBin.  The times when sharing a RenderBin is
possible are very limited and have to be assessed on a case-by-case basis.

If we do want to explore the possibility of greater re-use of cull
results, then I think we'd best look at extending the CullVisitor and
the rendering back end in ways that enable new means of managing
things.  One could perhaps provide convenience methods that offer
similar set-up functionality to an osg::Camera and make this type of
functionality easier to use; this would make it lighter weight to
avoid using osg::Camera, but it is added complexity that we would have
to look at very carefully to be sure it is properly justified.  It
might be that a better solution is to enable easier management of the
osg::Cameras that are used to implement these techniques, so that the
user front end doesn't need to worry about how the scene graph is
implementing a post-processing effect; it just configures the
interface it needs, and the back end goes away and does what it needs.

Robert.


[osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Wang Rui
Hi Robert, hi all,

I want to raise this topic while reviewing my effect
compositing/view-dependent shadow code. As far as I know, most of us use
osg::Camera for rendering to texture and thus for post-processing/deferred
shading work. We attach the camera's sub-scene to a texture with attach()
and set the render target to FBO and so on. It has worked fine so far in my
client work.

But these days I found another way to render a scene to an FBO-based
texture/image:

First I create a node (including input textures, shaders, and the sub-scene
or a screen-sized quad) and apply an FBO and Viewport as its state attributes:

osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setTextureSize( 1024, 1024 );

osg::ref_ptr<osg::FrameBufferObject> fbo = new osg::FrameBufferObject;
fbo->setAttachment( osg::Camera::COLOR_BUFFER,
                    osg::FrameBufferAttachment(tex.get()) );

node->getOrCreateStateSet()->setAttributeAndModes( fbo.get() );
node->getOrCreateStateSet()->setAttributeAndModes( new osg::Viewport(0,
    0, 1024, 1024) );

Then if we need more deferred passes, we can add more nodes with
screen-sized quads and set texOutput as a texture attribute. The intermediate
passes require a fixed view and projection matrix, so we can add a cull
callback like:

cv->pushModelViewMatrix( new RefMatrix(Matrix()),
    Transform::ABSOLUTE_RF );
cv->pushProjectionMatrix( new RefMatrix(Matrix::ortho2D(0.0, 1.0, 0.0,
    1.0)) );

each_child->accept( nv );

cv->popProjectionMatrix();
cv->popModelViewMatrix();
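
Wrapped up as a complete cull callback, the fragment above might look like
this (a sketch; the class name is made up):

struct FixedMatrixCullCallback : public osg::NodeCallback
{
    virtual void operator()( osg::Node* node, osg::NodeVisitor* nv )
    {
        // installed via node->setCullCallback(), so nv is the CullVisitor
        osgUtil::CullVisitor* cv = static_cast<osgUtil::CullVisitor*>( nv );
        cv->pushModelViewMatrix( new osg::RefMatrix(osg::Matrix()),
                                 osg::Transform::ABSOLUTE_RF );
        cv->pushProjectionMatrix(
            new osg::RefMatrix(osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0)) );

        traverse( node, nv );  // each child accepts nv inside the fixed matrices

        cv->popProjectionMatrix();
        cv->popModelViewMatrix();
    }
};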

This works well in my initial tests, and it won't require a list of
osg::Camera nodes. I think this would be a lightweight way to do the
post-processing work, as it won't create multiple RenderStages at the
back end and will reduce the possibility of having too many nested
cameras in a scene graph.

Do you think it would be useful to have such a class? The user inputs a
sub-scene or any texture; the class uses multiple passes to process it and
outputs to a result texture. The class won't need internal cameras for the
RTT work, and it can be placed anywhere in the scene graph as a deferred
pipeline implementer, or as a pure GPU-based image filter.

I'd like to rewrite my effect compositor implementation with the new idea
if it is considered worthwhile; otherwise I will drop it and get ready to
submit both the deferred shading pipeline and the new VDSM code in the
following week. :-)

Cheers,

Wang Rui


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Paul Martz
This is how I've been doing post-rendering effects, too.

However, I have never done any performance benchmarks. My instinct tells me
that this method should have faster cull time than using a Camera, but if
post-rendering cull time makes up only a small percentage of the total cull
time, then I imagine the performance benefit would be difficult to measure.

Have you done any performance comparisons against equivalent use of Camera
nodes?


-- 
Paul Martz
Skew Matrix Software LLC


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Aurelien Albert
Hi,

Thanks for sharing this; I had never thought of this approach, and it's very
interesting.

Not only for performance reasons, but also for simplicity reasons.

I'll try to dig into that further to see if it can be useful for implementing
what is discussed here: http://forum.openscenegraph.org/viewtopic.php?t=11577

(using a render bin to configure the current FBO)

One use case could be one of the applications I'm working on:

- I have a scene with multiple objects
- I render the scene using an HDR shader: scene => FBO 1
- Post process (tone mapping): FBO 1 => FBO 2
- Render GUI elements with a normal shader: FBO 2 + GUI => FBO 3

I would like to move an object from HDR to normal rendering and vice versa by
simply specifying its RenderBin as FBO 1 or FBO 3.

I already have a shader system which automatically switches between HDR and
normal rendering, but for now I have to move the object from one sub-graph to
another, which breaks the graph logic for the user and also for events and
intersectors.

Thank you!

Cheers,
Aurelien

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=52679#52679


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Robert Osfield
Hi Rui,

The cost of traversing an osg::Camera in cull should be very small,
and one can avoid using a separate RenderStage by using a nested
render order (NESTED_RENDER).  Using a different custom Node to do
similar RTT setup to what is done for osg::Camera will incur similar
CPU and GPU costs, so unless there is a really sound reason for
providing an alternative, I'd rather just stick with osg::Camera, as
an alternative would just be more code to maintain and more code to
teach people how to use, with the additional hurdle of having two ways
to do the same thing.  If need be, perhaps osg::Camera could be
extended if it isn't able to handle all the currently required usage cases.
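
For illustration, a camera using the nested render order mentioned
above might be set up like this (a sketch - 'quad' stands in for
whatever subgraph the pass draws):

osg::ref_ptr<osg::Camera> nested = new osg::Camera;
nested->setRenderOrder( osg::Camera::NESTED_RENDER );  // rendered within the parent RenderStage
nested->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
nested->setProjectionMatrixAsOrtho2D( 0.0, 1.0, 0.0, 1.0 );
nested->addChild( quad.get() );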

As for scalability, there is no such thing as too many osg::Cameras,
nested or not, in the scene graph; the only limits are the amount of
memory available and your imagination, and both limits apply to any
scheme you come up with for doing something similar to osg::Camera.
While osg::Camera can do lots of tasks, in itself it isn't a large
object; it's only the buffers/textures that you attach that are
significant in size, and this applies to both approaches.

Right now I'm rather unconvinced there is a pressing need for an alternative.

Robert.


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Daniel Schmid
All the post-processing solutions I have checked out so far have the same
issues: they lack support for multiple viewports (for instance when using
CompositeViewer), and they all need some additional elements (cameras, quads,
etc.) added to the scene graph.

My expectations of a post-processing framework are these:
- It doesn't impact the scene graph: since post-processing happens after
the whole geometry is rendered, it must be completely separate.
- It must be compatible with multiple views

Why is there no post-processing framework that can be attached/inserted into
the FinalDraw or PostDraw callback of the main camera? That is where I expect
post-processing to happen!
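
Attaching such a hook is straightforward in itself - a sketch, with the
actual pipeline left as a placeholder:

struct PostProcessing : public osg::Camera::DrawCallback
{
    virtual void operator()( osg::RenderInfo& renderInfo ) const
    {
        // run the post-processing passes here, using renderInfo.getState()
    }
};

mainCamera->setFinalDrawCallback( new PostProcessing );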

Right now I'm using osgPPU, and I have modified it to work with
CompositeViewer and multiple views, but I'm still forced to have one
post-processing camera (including its whole unit pipeline) per view stored
in the scene graph.

Cheers,
Daniel

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=52683#52683


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Aurelien Albert
Hi all,

I'm not sure the CPU cost is really the issue here, but it would be useful to
have methods like these:

executeCamera(osg::Camera*, osg::State*)
executeCameraAsync(osg::Camera*, osg::State*)

=> execute all the code needed to render the subgraph of the camera, with
async wait
=> inputs are controlled using the StateSet of the camera + subgraph
=> outputs are controlled by the camera's render target

So we can do processing like this in a PostDraw/FinalDraw callback:


Code:
executeCamera(cameraBlur, renderInfo.getState());

if (something)
{
    executeCamera(cameraFX, renderInfo.getState());
}

float x = computeSomething();

cameraToneMapping->getOrCreateStateSet()->getOrCreateUniform("x", osg::Uniform::FLOAT)->set(x);
executeCamera(cameraToneMapping, renderInfo.getState());


with:

cameraBlur: a small graph - a render quad, a shader, and the main scene bound
as an input texture

cameraFX: similar, with another effect

cameraToneMapping: similar, executing a tone mapping controlled via the
uniform x

Here we can mix GLSL processing and CPU control code, which is very useful
for advanced processing.
It's also difficult to achieve using a standard graph without playing a lot
with callbacks and render order.

Aurelien

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=52684#52684


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Wang Rui
Hi all,

Thanks for the replies. It is always midnight for me when most community
members are active, so I have to reply to you all later, in my morning. :-)

Paul, I haven't done any comparisons yet. There won't be many post-processing
steps in a common application, and as Robert says, the cull-time cost of a
camera and a normal node won't be too different, so I think the results may
be difficult to measure.

Aurelien's earlier idea (using RenderBins directly) also interests me, but it
would change the back end dramatically. I'm also focusing on implementing a
complete deferred pipeline including HDR, SSAO, color grading and AA work,
and finally merging it with normal scenes like the HUD GUI.
The automatic switch between the deferred shading and normal pipelines is
done by changing the whole 'technique' instead of moving child nodes, as may
be seen in the osgRecipes project I'm maintaining.

But I don't think it would be easy to implement an executeCameraAsync()
method at present, as OSG is a lazy rendering system and one can hardly
insert CPU-side computation into FBO cameras. Maybe it could be done by
using the pre- and post-draw callbacks of a specified camera.

I also agree with Daniel's second point, that the pipeline should be
compatible with multiple views. With the pipeline as a node in the scene
graph, we can easily do this by sharing the same root node in different
views. On his first point, because we also have nodes that should not be
affected by the post-processing effects (like the GUI/HUD display), and
developers may require multiple post effects in the same scene graph (e.g.,
drawing dynamic and static objects differently), I don't find it convincing
to totally separate the post-processing framework and place it in draw
callbacks or the viewer's graphics operations.

So, in conclusion, I agree with Robert that OSG itself doesn't need an
additional RTT node at present, and I will use cameras to perform every
pass, which has already proved in my client work to be compatible with
most current OSG functionality, including the VDSM shadows, and with some
external libraries like SilverLining and osgEarth.

I will try to tidy up and submit my current code in the next week, as well
as a demo scene. Then I will modify the osgRecipes project to use the new
idea that flashed into my mind and find out its pros and cons.

Thanks,

Wang Rui


Re: [osg-users] Cheaper way to implement rendering-to-texture and post-processing pipeline?

2013-02-14 Thread Paul Martz
Now that I give this some more thought, the concept of post-processing is
inherently non-spatial, so it really doesn't belong in a scene graph at
all. Repeatedly culling entities that we know will always be rendered is
redundant at best. Wouldn't it be better to have a list of dedicated RTT
objects as described by Rui, and process them as a Camera post-draw
callback?

Just thinking out loud...
-- 
Paul Martz
Skew Matrix Software LLC