Re: [osg-users] Setting up FBO in drawable
Hi, I can't help you with your specific drawable question, but what would you like to achieve? The osggameoflife example demonstrates ping-pong using multiple cameras and switches. You can also swap output textures (if they are configured identically) using a callback. See here for inspiration: http://code.google.com/p/flitr/source/browse/trunk/examples/keep_history_pass/keep_history_pass.cpp

cheers
jp

On 28/09/2011 10:45, Emmanuel Roche wrote:

Hi everyone,

I'm trying to set up a pure OpenGL FBO with a render-to-texture target in an OSG drawable, but I just can't figure out how to do that properly (e.g. how to isolate those raw OpenGL calls from the rest of the OSG scene). My draw implementation is just:

    virtual void drawImplementation(osg::RenderInfo& info) const
    {
        OSG_NOTICE << "Drawing PingPongDrawable..." << std::endl;
        osg::State* state = info.getState();
        const unsigned int contextID = state->getContextID();
        if (!_initialized && !init(contextID, *state))
        {
            OSG_WARN << "Failed FBO setup!" << std::endl;
            return;
        }
        state->checkGLErrors("end of PingPongDrawable drawing.");
    }

So I'm really just calling an init function once, just to _create_ an FBO; I haven't even started using it. The init function is as follows:

    bool init(unsigned int contextID, osg::State& state) const
    {
        const FBOExtensions* fbo_ext = FBOExtensions::instance(contextID, true);
        const osg::Texture2DArray::Extensions* t2darray_ext =
            osg::Texture2DArray::getExtensions(contextID, true);

        // Push attribs to avoid collisions with the existing OSG scene?
        glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT | GL_TEXTURE_BIT | GL_ENABLE_BIT);
        state.checkGLErrors("Before PPD init.");

        // Prepare the target texture for the FBO:
        state.setActiveTextureUnit(1);
        state.checkGLErrors("Activating texture slot 1");

        int FFT_SIZE = 256;
        GLuint fftaTex = 0;
        glGenTextures(1, &fftaTex);
        glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, fftaTex);
        glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameterf(GL_TEXTURE_2D_ARRAY_EXT, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16);
        t2darray_ext->glTexImage3D(GL_TEXTURE_2D_ARRAY_EXT, 0, GL_RGBA16F_ARB,
                                   FFT_SIZE, FFT_SIZE, 5, 0, GL_RGBA, GL_FLOAT, NULL);
        fbo_ext->glGenerateMipmap(GL_TEXTURE_2D_ARRAY_EXT);
        state.checkGLErrors("preparing target texture");

        // Initialize the FBO
        fbo_ext->glGenFramebuffers(1, &_fftFbo);
        state.checkGLErrors("Generating FBO");
        fbo_ext->glBindFramebuffer(GL_FRAMEBUFFER_EXT, _fftFbo);
        state.checkGLErrors("Bind Framebuffer in init.");

    #ifdef ATTACH_TEXTURE
        fbo_ext->glFramebufferTexture(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, fftaTex, 0);
        state.checkGLErrors("FramebufferTexture setup");
    #endif

        GLuint fboId = state.getGraphicsContext() ?
            state.getGraphicsContext()->getDefaultFboId() : 0;
        fbo_ext->glBindFramebuffer(GL_FRAMEBUFFER_EXT, fboId);
        if (fbo_ext->glCheckFramebufferStatus(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        {
            OSG_WARN << "Error while setting up Pingpong FBO." << std::endl;
        }
        state.checkGLErrors("end of Framebuffer settings");

        glBindTexture(GL_TEXTURE_2D_ARRAY_EXT, 0);
        glPopAttrib();

        _initialized = true;
        return true;
    }

With this drawable in my scene, I don't have any problem as long as ATTACH_TEXTURE is *undefined*. When I define it, I still don't get any error reported by the drawable itself (none of the checkGLErrors I inserted fire), but I then get a continuous stream of "Warning: detected OpenGL error 'invalid operation' at after RenderBin::draw(..)" messages :-(

=> Any idea what I'm doing wrong here? How can I enforce isolation between these OpenGL calls and the rest of the OSG scene? After all, since this init function is called only once, there shouldn't be a continuous stream of warnings unless it has a side effect outside of this drawable's encapsulation...

Thanks for your help!! I really feel desperate now... :'(

Manu.

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
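[Editor's note, not part of the thread: a likely source of the "invalid operation" stream is that glPushAttrib()/glPopAttrib() do not save or restore the framebuffer binding, and osg::State caches the GL state it has applied, so raw glBindTexture/glBindFramebuffer calls leave that cache stale. A hedged sketch of a safer wrapper, assuming OSG 3.x's osg::FBOExtensions and osg::State APIs and a GL header that defines GL_FRAMEBUFFER_BINDING_EXT; the function name is hypothetical:]

```cpp
#include <osg/State>
#include <osg/FrameBufferObject>

// Sketch: wrap raw FBO setup so the previously bound framebuffer is
// restored afterwards, and OSG's cached state is invalidated.
// Note glPushAttrib() does NOT cover the framebuffer binding point.
void runIsolatedGLSetup(osg::State& state)
{
    const unsigned int contextID = state.getContextID();
    const osg::FBOExtensions* fbo_ext =
        osg::FBOExtensions::instance(contextID, true);

    GLint prevFbo = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING_EXT, &prevFbo);  // save binding

    // ... raw glGenTextures / glTexImage3D / glFramebufferTexture
    //     setup would go here ...

    fbo_ext->glBindFramebuffer(GL_FRAMEBUFFER_EXT, prevFbo);  // restore

    // The raw binds bypassed osg::State's caches, so mark everything
    // dirty to force OSG to re-apply state before the next drawable:
    state.dirtyAllAttributes();
    state.dirtyAllModes();
}
```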
Re: [osg-users] Setting up FBO in drawable
Thanks J.P., but I actually know the osggameoflife example almost by heart already, and it won't fit the bill: I need real single-pass ping-pong rendering here if I want to achieve good performance.

Cheers,
Manu.

2011/9/28 J.P. Delport jpdelp...@csir.co.za:
[quoted reply and original message snipped]
Re: [osg-users] Setting up FBO in drawable
Hi, I see you mention FFT in your code. Is this what you want to do? Do you have a working FFT with multiple OSG cameras? Is it too slow for you?

jp

On 28/09/2011 11:30, Emmanuel Roche wrote:
[quoted message snipped]
Re: [osg-users] Setting up FBO in drawable
Yes J.P., as far as I understand the code I'm trying to integrate into OSG, this step basically performs an FFT computation on the GPU. The code I'm using as a template is written in pure OpenGL: it is the ocean lighting implementation from Eric Bruneton (http://evasion.inrialpes.fr/~Eric.Bruneton/). My goal is to integrate this code with osgEarth and achieve realistic ocean rendering that way.

In the OpenGL implementation, he basically does this:

1. set up two Texture2DArrays, each with 5 layers;
2. attach those texture arrays as color buffers 0 and 1 of a single FBO;
3. ping-pong the rendering between the two texture arrays, with 2x8 passes in a single rendering cycle (i.e. a single frame), like this:

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fftFbo2);
    glUseProgram(fftx->program);
    glUniform1i(glGetUniformLocation(fftx->program, "nLayers"), choppy ? 5 : 3);
    for (int i = 0; i < 2; ++i) {
        // glUniform1f(glGetUniformLocation(fftx->program, "pass"), float(i + 0.5) / PASSES);
        if (i % 2 == 0) {
            glUniform1i(glGetUniformLocation(fftx->program, "imgSampler"), FFT_A_UNIT);
            glDrawBuffer(GL_COLOR_ATTACHMENT1_EXT);
        } else {
            glUniform1i(glGetUniformLocation(fftx->program, "imgSampler"), FFT_B_UNIT);
            glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
        }
        drawQuad();
    }
    glUseProgram(ffty->program);
    glUniform1i(glGetUniformLocation(ffty->program, "nLayers"), choppy ? 5 : 3);
    for (int i = PASSES; i < 2 * PASSES; ++i) {
        glUniform1f(glGetUniformLocation(ffty->program, "pass"), float(i - PASSES + 0.5) / PASSES);
        if (i % 2 == 0) {
            glUniform1i(glGetUniformLocation(ffty->program, "imgSampler"), FFT_A_UNIT);
            glDrawBuffer(GL_COLOR_ATTACHMENT1_EXT);
        } else {
            glUniform1i(glGetUniformLocation(ffty->program, "imgSampler"), FFT_B_UNIT);
            glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
        }
        drawQuad();
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

=> As you can see, a single FBO is used and glDrawBuffer is called multiple times to toggle the target on the fly.

The good news in this story: I don't know if you followed my previous discussion with Sergey (the thread called "Changing DrawBuffer for FBO"), but it turns out the DrawBuffer StateAttribute I created is working just as expected. So this ping-pong implementation is basically working; the problem I noticed comes from another point: the GLSL program used in the process.

In the OpenGL code, the two Texture2DArrays are attached to the FBO this way:

    glGenFramebuffersEXT(1, &fftFbo2);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fftFbo2);
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, fftaTex, 0);
    glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, fftbTex, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

So in my OSG implementation I turned that into (this is Lua code):

    cam:attach(osg.Camera.BufferComponent.COLOR_BUFFER0, result.ffts[1], 0, 0, true);
    cam:attach(osg.Camera.BufferComponent.COLOR_BUFFER1, result.ffts[0], 0, 0, true);

But now I think the problem comes from how those bindings are interpreted in each case:

- The GLSL program performing the rendering contains a geometry shader. This shader emits a screen quad for each of the 5 layers of the Texture2DArray, and the fragment shader just writes to gl_FragColor.
- With the pure OpenGL code, automagically, the Texture2DArray layer that gets written is the layer corresponding to the geometry layer currently being emitted...
- Whereas in the OSG code it seems only the first layer is attached, which would make sense since the second 0 after result.ffts[x] means layer=0...

Hmm, I realize this all might not be that clear, but that's really the best I can do :-)

Anyway, on the whole I think the ping-pong rendering system is OK (understand here that if I were to use a single Texture2D instead of a Texture2DArray attachment, everything would be just fine). The real problem is that I'm trying to write to multiple layers of a single Texture2DArray using a geometry shader to select the proper layer... which might just not be supported in OSG for some reason??

Any idea about all this? As a fallback, I could still manually create 5 quads per ping-pong pass, define a uniform to select the proper source layer in the Texture2DArray sampler, attach all the layers one by one to the FBO (which would give me 2x5 attachments), and still use a DrawBuffer StateAttribute to select the proper texture and layer to render to... But this would be quite sub-optimal, and using a geometry shader would probably be more efficient, no? So, is it possible?

Cheers,
Manu.

2011/9/28 J.P. Delport jpdelp...@csir.co.za:
[quoted message snipped]
Re: [osg-users] Setting up FBO in drawable
Hi!

I performed some additional tests on this issue and I think I'm now getting to the bottom of it. When I attach my Texture2DArrays as:

    cam:attach(osg.Camera.BufferComponent.COLOR_BUFFER0, result.ffts[1], 0, 0, true);
    cam:attach(osg.Camera.BufferComponent.COLOR_BUFFER1, result.ffts[0], 0, 0, true);

and force the GLSL program to only handle and write the computed values for layer 0, the final result is perfect (on layer 0 of the Texture2DArray). Now, if I switch the indices to handle layer 1 instead:

    cam:attach(osg.Camera.BufferComponent.COLOR_BUFFER0, result.ffts[1], 0, 1, true);
    cam:attach(osg.Camera.BufferComponent.COLOR_BUFFER1, result.ffts[0], 0, 1, true);

and force the GLSL program to write only the layer 1 data, then layer 1 of the second Texture2DArray is completely black and layer 1 of the first Texture2DArray seems unchanged (that first Texture2DArray is initialized with non-zero data). => So it seems nothing is written to my layer 1 attachments in this second case. Any idea what could be wrong with those Texture2DArray targets?

Cheers,
Manu.

2011/9/28 Emmanuel Roche roche.emman...@gmail.com:
[quoted message snipped]
Re: [osg-users] Setting up FBO in drawable
Hmmm... and God created:

    osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER

and:

    cam:setImplicitBufferAttachmentMask(
        osg.Camera.ImplicitBufferAttachment.IMPLICIT_COLOR_BUFFER_ATTACHMENT,
        osg.Camera.ImplicitBufferAttachment.IMPLICIT_COLOR_BUFFER_ATTACHMENT);

It now works perfectly!! This is the most beautiful day of my life! (well... almost :-) )

Thanks for your support, guys! Now I can finally proceed with this task.

Cheers,
Manu.

2011/9/28 Emmanuel Roche roche.emman...@gmail.com:
[quoted message snipped]
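[Editor's note: for readers landing on this thread, the concluding fix can be sketched in OSG C++ rather than Lua. This is an untested sketch: the function name makeFftCamera, the FRAME_BUFFER_OBJECT line, and the comments are my additions; the thread itself only confirms the attach() calls with FACE_CONTROLLED_BY_GEOMETRY_SHADER and the setImplicitBufferAttachmentMask() call.]

```cpp
#include <osg/Camera>
#include <osg/Texture2DArray>
#include <osg/ref_ptr>

// Sketch: an RTT camera whose geometry shader selects the destination
// slice of each 5-layer Texture2DArray via gl_Layer.
osg::ref_ptr<osg::Camera> makeFftCamera(osg::Texture2DArray* fftA,
                                        osg::Texture2DArray* fftB)
{
    osg::ref_ptr<osg::Camera> cam = new osg::Camera;
    cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

    // Passing FACE_CONTROLLED_BY_GEOMETRY_SHADER as the 'face' argument
    // makes OSG attach the whole array (glFramebufferTexture without a
    // layer index), so gl_Layer in the geometry shader picks the slice.
    cam->attach(osg::Camera::COLOR_BUFFER0, fftA, 0,
                osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
    cam->attach(osg::Camera::COLOR_BUFFER1, fftB, 0,
                osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);

    // Only request an implicit color buffer, so OSG does not silently
    // add a depth attachment this pure 2D FFT pass does not need.
    cam->setImplicitBufferAttachmentMask(
        osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT,
        osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT);
    return cam;
}
```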