Hi Rômulo,
On 24 April 2018 at 05:21, Rômulo Cerqueira wrote:
> is there any example to implement FBO without cameras? Where can I find it?
Cameras are just the public interface that users have to set up all
the things that are required when doing render to texture. There
isn't any "heavy" con
Folks,
is there any example to implement FBO without cameras? Where can I find it?
...
Thank you!
Cheers,
Rômulo
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=73488#73488
Hi Paul,
On 15 February 2013 04:09, Paul Martz wrote:
> Now that I give this some more thought, the concept of post-processing is
> inherently non-spatial, so it really doesn't belong in a scene graph at all.
> Repeatedly "culling" entities that we know will always be rendered is
> redundant at b
Hi,
In my present implementation in the osgRecipes project, I create a list of
pre-render cameras internally as post-processing passes, instead of adding
them explicitly to the scene graph. So at the user level one may simply
write:
EffectCompositor* effect = new EffectCompositor;
effect->loadFromEf
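A minimal sketch of what one such internally managed pre-render pass might look like (the helper name createPassCamera and the pass wiring below are illustrative assumptions, not the actual EffectCompositor code):

```cpp
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>

// Illustrative helper: build one FBO pass that renders a full-screen quad,
// textured with the previous pass's output, into its own colour texture.
// The compositor would keep these cameras in an internal list and render
// them in order, so the user-facing scene graph never sees the pass chain.
osg::Camera* createPassCamera(osg::Texture2D* input, osg::Texture2D* output,
                              int width, int height)
{
    osg::Camera* camera = new osg::Camera;
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->setRenderOrder(osg::Camera::PRE_RENDER);
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    camera->setViewport(0, 0, width, height);
    camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    camera->attach(osg::Camera::COLOR_BUFFER, output);

    // Full-screen quad covering the unit square in an ortho projection.
    camera->setProjectionMatrix(osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0));
    osg::Geode* quad = new osg::Geode;
    quad->addDrawable(osg::createTexturedQuadGeometry(
        osg::Vec3(), osg::Vec3(1.0f, 0.0f, 0.0f), osg::Vec3(0.0f, 1.0f, 0.0f)));
    quad->getOrCreateStateSet()->setTextureAttributeAndModes(0, input);
    camera->addChild(quad);
    return camera;
}
```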
Hi,
> the concept of post-processing is inherently non-spatial, so it really
> doesn't belong in a scene graph at all
=> this is why I think we should be able to execute a render pass on an
arbitrary camera : the subgraph of this camera may not have a spatial
organization, but a process-logi
Now that I give this some more thought, the concept of post-processing is
inherently non-spatial, so it really doesn't belong in a scene graph at
all. Repeatedly "culling" entities that we know will always be rendered is
redundant at best. Wouldn't it be better to have a list of dedicated RTT
objec
Hi all,
Thanks for the replies. It is always midnight for me when most community
members are active, so I have to reply to you all later in my morning. :-)
Paul, I haven't done any comparisons yet. Post-processing steps won't be
too many in a common application, and as Robert says, the cull time c
Hi all,
I'm not sure the CPU cost is really the issue here, but it would be useful to
have these kinds of methods:
executeCamera(osg::Camera*, osg::State*)
executeCameraAsync(osg::Camera*, osg::State*)
=> execute all the code needed to render the camera's subgraph, with an async
wait.
=> inputs
All the postprocessing solutions I have checked out so far have the same
issue: they lack support for multiple viewports (for instance when using
CompositeViewer), and they all need additional elements (cameras, quads,
etc.) added to the scene graph.
My expectation of a postprocessing
Hi Rui,
The cost of traversing an osg::Camera in cull should be very small,
and one can avoid using a separate RenderStage by using a
NESTED_RENDER render order. Using a different custom Node to do the RTT
setup that is done for osg::Camera will incur similar CPU and GPU costs, so
unless there is really s
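The render-order suggestion above can be sketched as follows (a minimal example assuming the stock osg::Camera API; the subgraph variable is a placeholder for an existing node):

```cpp
#include <osg/Camera>

// NESTED_RENDER keeps the camera's subgraph inside the parent's
// RenderStage rather than creating a separate one, so the extra
// per-camera work in the cull traversal stays minimal.
osg::ref_ptr<osg::Camera> camera = new osg::Camera;
camera->setRenderOrder(osg::Camera::NESTED_RENDER);
camera->setReferenceFrame(osg::Transform::RELATIVE_RF);
// camera->addChild(subgraph.get());  // subgraph: assumed to exist
```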
Hi,
Thanks for sharing this, I've never thought of this approach; it's very
interesting.
Not only for performance reasons, but also for simplicity.
I'll try to dig into it further to see if it can be useful for implementing what is
discussed here: http://forum.openscenegraph.org/viewtopic.php
This is how I've been doing post-rendering effects, too.
However, I have never done any performance benchmarks. My instinct tells me
that this method should have faster cull time than using a Camera, but if
post-rendering cull time makes up only a small percentage of the total cull
time, then I im
Hi Robert, hi all,
I wanted to raise this topic while reviewing my effect
compositing/view-dependent shadow code. As far as I know, most of us use
osg::Camera for rendering to texture and thus the post-processing/deferred
shading work. We attach the camera's sub scene to a texture with attach()
and se
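The conventional render-to-texture setup described above can be sketched as follows (a minimal example assuming the stock osg::Camera FBO path; subScene is a placeholder for the user's graph):

```cpp
#include <osg/Camera>
#include <osg/Texture2D>

// Target texture the camera's subgraph will be rendered into.
osg::ref_ptr<osg::Texture2D> colorTex = new osg::Texture2D;
colorTex->setTextureSize(1024, 1024);
colorTex->setInternalFormat(GL_RGBA);
colorTex->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
colorTex->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);

// Pre-render FBO camera: attach() binds the colour buffer to the texture.
osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);
rttCamera->setViewport(0, 0, 1024, 1024);
rttCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
rttCamera->attach(osg::Camera::COLOR_BUFFER, colorTex.get());
// rttCamera->addChild(subScene.get());  // subScene: the user's scene graph
```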