Hi Sergey and everyone,
I've been trying to set up the "ping pong rendering" with multiple cameras as
previously mentioned... unfortunately this is not really working :-) : it
seems I just can't render to one texture, use it as input, and re-render
again as easily as I expected.
So now I have to switch to another option. I have some experience with
osgPPU already, but I have only used it in a "regular" way (with Texture2D,
rendering a couple of Gauss passes in X and Y...). Here the situation is
much trickier: I have to ping-pong between Texture2DArrays with Texture3D
samplers and with many more passes. It might be possible with osgPPU, but I
have the feeling I could spend a lot of time getting everything properly set
up first... (I already spent a lot of time trying to set something up with
regular OSG cameras, so I would like to reuse that work if possible.)
So my idea is: what if I create a simple new StateAttribute class that
calls glDrawBuffer()? This StateAttribute would be as simple as the
AlphaFunc StateAttribute, for instance. Would this make sense? Or do you
think I would just be wasting a few additional hours here? :-)
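To make the idea concrete, here is roughly what I have in mind (a rough, untested sketch: the class name DrawBufferAttribute is just something I made up, and I am reusing an arbitrary value for the attribute type since there is no dedicated enum slot for it):

```cpp
#include <osg/StateAttribute>
#include <osg/FrameBufferObject> // for the GL_COLOR_ATTACHMENT*_EXT enums

// Minimal StateAttribute whose only job is to call glDrawBuffer() when
// applied, so each screen quad's StateSet can redirect its output to a
// different color attachment of the same FBO camera.
class DrawBufferAttribute : public osg::StateAttribute
{
public:
    DrawBufferAttribute(GLenum buffer = GL_COLOR_ATTACHMENT0_EXT)
        : _buffer(buffer) {}

    DrawBufferAttribute(const DrawBufferAttribute& dba,
                        const osg::CopyOp& copyop = osg::CopyOp::SHALLOW_COPY)
        : osg::StateAttribute(dba, copyop), _buffer(dba._buffer) {}

    // Hypothetical type id: there is no standard Type enum entry for this
    // attribute, so an unused value is cast here.
    META_StateAttribute(osg, DrawBufferAttribute,
                        static_cast<osg::StateAttribute::Type>(1001));

    virtual int compare(const osg::StateAttribute& sa) const
    {
        COMPARE_StateAttribute_Types(DrawBufferAttribute, sa);
        COMPARE_StateAttribute_Parameter(_buffer);
        return 0; // attributes are equivalent
    }

    // Called when the owning StateSet is applied: subsequent draw calls
    // write to the requested color attachment.
    virtual void apply(osg::State&) const
    {
        glDrawBuffer(_buffer);
    }

protected:
    GLenum _buffer;
};
```

This follows the same pattern as AlphaFunc (copy constructor, META_StateAttribute, compare(), apply()), just with glDrawBuffer() in apply().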
Then the plan would be to build a graph such as:

Camera (PRE_RENDER, attachment color0 = first Texture2DArray,
        attachment color1 = second Texture2DArray, rendering shader program)
 |-- Screen Quad 0 with StateSet (StateAttribute glDrawBuffer = GL_COLOR_ATTACHMENT1_EXT)
 |-- Screen Quad 1 with StateSet (StateAttribute glDrawBuffer = GL_COLOR_ATTACHMENT0_EXT)
 |-- Screen Quad 2 with StateSet (StateAttribute glDrawBuffer = GL_COLOR_ATTACHMENT1_EXT)
 |-- etc.
... I might have to ensure the quads are rendered in that exact order, but
I think a simple render bin number setting should be enough for that, no?
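The setup loop could look something like this (again a sketch, not tested: DrawBufferAttribute stands for the hypothetical glDrawBuffer state attribute described above, and makeScreenQuad() is a made-up helper that builds a full-screen quad Geode):

```cpp
// Build the quads under the pre-render camera, forcing their draw order
// with increasing render bin numbers and alternating the draw buffer.
for (int i = 0; i < numPasses; ++i)
{
    osg::Geode* quad = makeScreenQuad();            // hypothetical helper
    osg::StateSet* ss = quad->getOrCreateStateSet();

    // Render the quads strictly in creation order.
    ss->setRenderBinDetails(i, "RenderBin");

    // Even passes read tex0 and write attachment 1;
    // odd passes read tex1 and write attachment 0.
    GLenum target = (i % 2 == 0) ? GL_COLOR_ATTACHMENT1_EXT
                                 : GL_COLOR_ATTACHMENT0_EXT;
    ss->setAttributeAndModes(new DrawBufferAttribute(target));

    camera->addChild(quad);
}
```

The bin number alone should impose the ordering within the camera, as long as all the quads end up in the same render stage.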
If this really works as expected, it would also mean I would not have to
use more than one camera to perform all the needed passes... So I really
think this is the next option I should try, unless someone already knows
why it won't work :-)
Cheers,
Manu.
2011/9/23 Sergey Polischuk <[email protected]>
> Hi, Manu
>
> Can't comment on the performance gain with pure GL, but you can write a
> small test to see the possible gain. OSG doesn't share FBOs between cameras
> and doesn't keep track of bound VBOs, so most of the gain would come from
> reducing the number of glBindFramebuffer...(..) calls and of geometry setup
> (vertex/index/texcoord arrays/pointers).
>
> It is possible to create whatever objects you need inside
> drawImplementation(), but you need to inform osg::State about any GL state
> that osg::State keeps track of and that you change with raw OpenGL calls
> (look at osg::State::haveApplied...(...)), or use the osg::State
> functionality instead of direct OpenGL calls (look at
> applyMode(..)/applyAttribute(..)), though not all GL calls can be done
> through the osg::State interface. Alternatively you can use pure GL and
> call osg::State::dirtyAll...() afterwards. Don't forget to restore the FBO
> used by OSG if you change the FBO binding inside drawImplementation().
> Also, there can be issues with multiple OpenGL contexts if you use
> multiple screens, so you may need to track the current contextID,
> create/store OpenGL handles for each contextID used, and pick the right
> handles for the current contextID.
>
> Also, with the pure GL approach you can get rid of the FBO and drawBuffer
> switches if you can use the NV_texture_barrier extension: you can create a
> texture array with two layers, reading from one layer and writing to the
> other based on a uniform value, or on the instanceID if you use instanced
> drawing for the screen-aligned quads.
>
> Cheers,
> Sergey.
> 23.09.2011, 15:50, "Emmanuel Roche" <[email protected]>:
>
> Thanks Sergey,
>
> Right now I'm using multiple cameras rendered one after another... but I
> have the feeling the performance is not too good (with 17 pre-render
> cameras...). Do you think I'm right to assume I could really improve
> performance by using pure GL code instead and creating only one FBO?
>
> As a matter of fact, my second idea would be to encapsulate all this,
> using approximately the code snippet from the previous mail, in a special
> drawable. But I'm wondering if this will lead to other issues (I'm not
> that familiar with pure GL code and creating special drawables): is it
> possible to allocate my FBO and everything else I need from within the
> drawImplementation() of a drawable? Does anybody foresee a big issue with
> doing this?
>
> Cheers,
> Manu.
>
>
> 2011/9/23 Sergey Polischuk <[email protected]>
>
> Hi, Manu
>
> There is no convenient support for renderbuffer ping-ponging. You can
> either use pure GL, a graph with lots of cameras set up with the correct
> render order and output textures, or an osgPPU graph with a chain of units.
>
> Cheers,
> Sergey.
> 22.09.2011, 16:28, "Emmanuel Roche" <[email protected]>:
>
>
> Hi everyone,
>
> I have a question regarding FBO usage and Draw/Read buffer changes:
>
> - I have one pre-render camera using an FBO with 2 textures attached (to
> COLOR_BUFFER0 and COLOR_BUFFER1)
> - Under that camera I would like to add multiple screen-aligned quads that
> would use the attached textures this way (call them tex0 and tex1):
>   - quad0 would use tex0 as input and draw to tex1
>   - quad1 would then use tex1 as input and draw to tex0
>   - quad2 would use tex0 as input and draw to tex1
>   - quad3 would then use tex1 as input and draw to tex0
>   - etc.
>
> This is possible in pure OpenGL using something like this:
>
>     glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fftFbo2);
>
>     glUseProgram(fftx->program);
>     glUniform1i(glGetUniformLocation(fftx->program, "nLayers"), choppy ? 5 : 3);
>     for (int i = 0; i < PASSES; ++i) {
>         glUniform1f(glGetUniformLocation(fftx->program, "pass"), float(i + 0.5) / PASSES);
>         if (i % 2 == 0) {
>             glUniform1i(glGetUniformLocation(fftx->program, "imgSampler"), FFT_A_UNIT);
>             glDrawBuffer(GL_COLOR_ATTACHMENT1_EXT);
>         } else {
>             glUniform1i(glGetUniformLocation(fftx->program, "imgSampler"), FFT_B_UNIT);
>             glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
>         }
>         drawQuad();
>     }
>
>     glUseProgram(ffty->program);
>     glUniform1i(glGetUniformLocation(ffty->program, "nLayers"), choppy ? 5 : 3);
>     for (int i = PASSES; i < 2 * PASSES; ++i) {
>         glUniform1f(glGetUniformLocation(ffty->program, "pass"), float(i - PASSES + 0.5) / PASSES);
>         if (i % 2 == 0) {
>             glUniform1i(glGetUniformLocation(ffty->program, "imgSampler"), FFT_A_UNIT);
>             glDrawBuffer(GL_COLOR_ATTACHMENT1_EXT);
>         } else {
>             glUniform1i(glGetUniformLocation(ffty->program, "imgSampler"), FFT_B_UNIT);
>             glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
>         }
>         drawQuad();
>     }
>
>     glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
>
> ... but I can't figure out how to do something equivalent to the inner
> calls to glDrawBuffer in that snippet when I have a single camera
> (calling setDrawBuffer() on the camera is done once and for all before
> anything is rendered). Any ideas on what would be worth trying here?
>
> Cheers,
> Manu.
>
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org