Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Art,

That was quick! It's working now. Thanks a lot, this just made my life much easier. :) I tried it with the test example and my actual application; both are now working as they should. It didn't seem to break anything for me, but I'll report if anything comes up.

- Miika

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16127#16127

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Art,

Art Tevs wrote:
> [...] So, in order to avoid having to change OSG code, I decided to patch
> osgPPU to handle FBOs manually. Now, whenever a Unit is rendered, it first
> pushes the current FBO onto a stack and then activates its own. After
> rendering is done, it activates the previous FBO again. [...]
> So, take a look at the current osgPPU svn trunk. Please test it and let me
> know if you experience any other issues with it. J.P., could you also test
> the new osgPPU in your application? The changes are not trivial and hence
> may even introduce new bugs (I would expect them in the handling of
> multiple render targets).

Thanks for the update Art, I will let you know if there are any issues.

jp

--
This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. MailScanner thanks Transtec Computers for their support.
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Miika, J.P.,

ok, after hours of thinking and studying OSG's source code, I have to admit that this is not an easy issue to solve. The problem is that, even if I am able to restore any FBO attribute, FBOs used by cameras will break the whole thing down :( Imagine this simple scene graph:

mainCamera->FBO 1
 |
 +--- slaveCamera->FBO 2 --- scene
 |
 +--- Node->FBO 3
        |
      scene

So whenever slaveCamera is activated, the corresponding FBO will be bound. However, the problem is that this FBO is not handled like a StateAttribute which can be pushed and popped; it is a hard-coded call that enables the FBO. Now, whenever there is a node which uses another FBO, the traverser will pop FBO 3 after rendering the Node, and hence even pop the FBO out from under the mainCamera.

So, in order to avoid having to change OSG code, I decided to patch osgPPU to handle FBOs manually. Now, whenever a Unit is rendered, it first pushes the current FBO onto a stack and then activates its own. After rendering is done, it activates the previous FBO again. Miika, your example now works like a charm.

I still remember that the very first version of osgPPU handled FBOs on its own. However, I dropped that handling because there were issues with multithreading. It seems that there are no such issues anymore. So, take a look at the current osgPPU svn trunk. Please test it and let me know if you experience any other issues with it. J.P., could you also test the new osgPPU in your application? The changes are not trivial and hence may even introduce new bugs (I would expect them in the handling of multiple render targets).

Cheers,
Art

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16096#16096
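[Editor's note] The push/restore behaviour Art describes can be sketched as a scoped bind guard. This is only an illustration in plain C++, not osgPPU's actual code: `currentFbo`, `bindFbo`, and `FboBindGuard` are hypothetical stand-ins for the real GL binding calls, and the "stack" here is simply the C++ call stack of nested guards.

```cpp
#include <cassert>

// Hypothetical stand-in for the binding state that
// glBindFramebufferEXT would modify.
static unsigned currentFbo = 0;
static void bindFbo(unsigned id) { currentFbo = id; }

// Scoped guard modelling the osgPPU fix: remember whatever FBO was bound,
// bind the Unit's own FBO, and restore the previous binding afterwards.
class FboBindGuard {
public:
    explicit FboBindGuard(unsigned unitFbo) : previous_(currentFbo) {
        bindFbo(unitFbo);    // activate the Unit's own FBO
    }
    ~FboBindGuard() {
        bindFbo(previous_);  // restore the previous FBO, not FBO 0
    }
private:
    unsigned previous_;      // the FBO that was bound before this Unit
};
```

Nested guards restore outer bindings in LIFO order, so a Unit rendering inside the main camera's pass hands the camera's FBO back instead of falling through to FBO 0.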
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
J.P. Delport wrote:
> sorry, I have not had time to run your test app, just thinking out loud...
>
> what happens if stage 3 is again a pre-render RTT camera with the
> processor as a child and you then use the viewer camera only to display
> the final output quad? Have you tried commenting out the osgPPU setting
> of bin numbers?

I tried this and some other similar trickery but, again, just when it seems to be almost working it adds that FBO 0 call. Doing it this way just moves the problem away from the main camera to the newly added second RTT camera, which now tries to draw directly to the screen.

Looking forward to that patch, Art; hopefully you can work something out. :) That certainly sounds like it would be more correct behavior for OSG in general, too.

Thanks,
Miika

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16073#16073
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Art,

Art Tevs wrote:
> I have decided to work out a patch for OSG to support this. OSG is capable
> of pushing and popping texture attributes while traversing the graph. The
> same thing must also happen for the FBOs. Of course, some elegant solution
> which covers almost everything would be better; however, I think just
> pushing and popping the FBOs is already what everybody needs ;)

just a note. Wojtek has also been working on some updates to the FBO code/management, see the thread on users and the submission:

Re: [osg-submissions] [osg-users] FBOs without color or depth attachments /DrawBuffer/ReadBuffer

I am not sure if your updates are complementary or totally orthogonal.

regards
jp
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Miika,

miika wrote:
> Alright, thanks. At least I know what it's about now; it was driving me
> crazy because I had no idea whether it was because of me, osgPPU, or OSG
> itself. I guess I'll have to try to hack together something that sort of
> circumvents the more elegant structures of OSG to handle this for the time
> being. Or perhaps just take a more deferred route, so that the actual
> shading itself is more like a post process too. Drop a line in this thread
> or somewhere if there's some progress with this issue someday.

I have decided to work out a patch for OSG to support this. OSG is capable of pushing and popping texture attributes while traversing the graph. The same thing must also happen for the FBOs. Of course, some elegant solution which covers almost everything would be better; however, I think just pushing and popping the FBOs is already what everybody needs ;)

> Also while I'm at it, something unrelated... Was there any clean way to
> enable trilinear filtering in osgPPU? It's pretty useful for quick-and-dirty
> variable-sized blurring with mipmaps in some post-processing things. osgPPU
> seems to explicitly disable it in UnitInMipmapOut::enableMipmapGeneration();
> I got around that by modifying one of the values there to
> LINEAR_MIPMAP_LINEAR, but that's not a very nice solution of course. Could
> there be an option to set that, for example?

Currently there is no direct way. You could do this by changing the texture parameters of the output (or input) textures of the ppus - so not changing the default values in osgPPU, but just changing the texture settings afterwards. Another way would be for you to add this functionality to osgPPU, and I would include it in the repository ;)

Cheers,
art

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16069#16069
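[Editor's note] The variable-sized blur trick discussed above relies on LINEAR_MIPMAP_LINEAR ("trilinear") filtering sampling two adjacent mipmap levels and blending by the fractional level of detail. A toy model in plain C++, with each mip level collapsed to a single average value; `sampleTrilinear` is a made-up name for illustration, not an OSG or osgPPU API:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy model of trilinear sampling along the mip chain only: a fractional
// level-of-detail blends linearly between the two nearest mip levels,
// which is what makes smoothly variable-sized blur possible.
static float sampleTrilinear(const std::vector<float>& mipLevels, float lod) {
    if (lod <= 0.0f) return mipLevels.front();
    if (lod >= static_cast<float>(mipLevels.size() - 1)) return mipLevels.back();
    const std::size_t lo = static_cast<std::size_t>(lod);
    const float t = lod - static_cast<float>(lo);  // fractional part of LOD
    // Linear interpolation between the two nearest mip levels.
    return mipLevels[lo] * (1.0f - t) + mipLevels[lo + 1] * t;
}
```

With NEAREST_MIPMAP_LINEAR or plain LINEAR there is no blend across levels, so the blur radius jumps in discrete steps instead of varying continuously.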
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi,

Miika Aittala wrote:
> Ok, here's a little example (code is attached). [...]
> The problem is that instead of this ordering, we get 1-3-2-4, which means
> that the pre-render result is always from the previous frame. [...]
> To fix this problem, we can try to add firstProcessor as a child of
> slaveCamera instead of root, as explained in my previous posts. However,
> this somehow breaks stage 3, which refuses to render to a texture anymore.

sorry, I have not had time to run your test app, just thinking out loud...

what happens if stage 3 is again a pre-render RTT camera with the processor as a child and you then use the viewer camera only to display the final output quad? Have you tried commenting out the osgPPU setting of bin numbers?

jp
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Alright, thanks. At least I know what it's about now; it was driving me crazy because I had no idea whether it was because of me, osgPPU, or OSG itself. I guess I'll have to try to hack together something that sort of circumvents the more elegant structures of OSG to handle this for the time being. Or perhaps just take a more deferred route, so that the actual shading itself is more like a post process too. Drop a line in this thread or somewhere if there's some progress with this issue someday. Cool technique and video, by the way; that and the others. :)

Also while I'm at it, something unrelated... Was there any clean way to enable trilinear filtering in osgPPU? It's pretty useful for quick-and-dirty variable-sized blurring with mipmaps in some post-processing things. osgPPU seems to explicitly disable it in UnitInMipmapOut::enableMipmapGeneration(); I got around that by modifying one of the values there to LINEAR_MIPMAP_LINEAR, but that's not a very nice solution of course. Could there be an option to set that, for example?

- Miika

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16065#16065
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Miika,

OK, I was able to run this. Yes, I can see the issue you are speaking about, and I think I understand where the problem is...

> To fix this problem, we can try to add firstProcessor as a child of
> slaveCamera instead of root, as explained in my previous posts. However,
> this somehow breaks stage 3, which refuses to render to a texture anymore.

This is also a correct step. If you do so, then you get the correct rendering order of PPUs: the slave camera is rendered first and the master camera afterwards. However, as you already pointed out, the frame buffer of the main camera is not bound anymore. I think the problem is that osgPPU uses FBOs without cameras and hence forces OSG to reset its state, including FBOs, after rendering the PPUs. Using the scene graph from your example application, we get this structure:

mainCamera->FBO
 |
root
 |
 +--- scene
 |
 +--- slaveCamera->FBO --- 1st Processor --- scene
 |
 +--- 2nd Processor

The traverser renders the objects in the correct order, but it also always restores the previous state of the parent after rendering a subgraph. For example, if you render objects in the main camera and one of these objects uses an FBO in its stateset, then after rendering that object the FBO will be restored. However, currently in OSG there is no mechanism to restore to the correct FBO, so it just restores to FBO 0. I already complained about that problem a year or two ago, because I also wasn't able to solve the issue. I have discussed the issue on the mailing list before, although that was just about DrawBuffers. I do not remember how those discussions ended, so maybe we ended up with either "not a needed feature" or "do not know how to patch OSG to support this". The problem is that one assumes FBOs are only used by cameras, and therefore there is only special treatment for that case.

The problem would arise with any other type of graph where a node sets an FBO to something. Because the RenderStage traces which state changes were done, it will also notice that the FBO was changed. Hence it will reset the FBO to its global default when the corresponding node is no longer rendered. So, in order to solve the issue, one has to patch OSG to restore the FBO to its previous state, and not just to some default. However, this seems to be harder than it sounds. Indeed, as I have stated previously on the mailing list: restoring a StateAttribute to some baseline default value is not the right way to do it. In most cases this works well; however, in complex cases it breaks down. Maybe I can try to write a patch to solve it, but this would require big changes to the current scene graph implementation, so it might just introduce many more bugs.

cheers,
art

P.S.

> Anyway, this is my problem of course and I'm not demanding that anyone fix
> it for me, but I think that having the ability to do this would be very
> beneficial for osgPPU in general. As I mentioned before, osgPPU would be
> perfectly suited for things such as processing shadow maps or any number of
> other advanced algorithms, aside from this one little detail.

Indeed, I have used PPUs not only for post-processing, but also for in-between processing. Take a look at this video: http://www.tevs.eu/project_sigasia08.html From 3:30 you see a TV screen with some video running on it. This video texture was processed by osgPPU to look a little bit bluish, like on real TVs. However, none of my projects really depends on frame coherency, hence I haven't taken a lot of effort to correct this issue :(

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16044#16044
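[Editor's note] The failure mode Art describes (restoring to a global default instead of to the parent's binding) can be modelled in a few lines of plain C++. Everything below is a hypothetical sketch, not OSG's RenderStage or State code; the names `Renderer`, `bindCamera`, and `renderNodeWithFbo` are invented for illustration:

```cpp
#include <cassert>

// Minimal model of the bug: a renderer that tracks the bound FBO. When a
// nested node's FBO state is popped, the restore path rebinds the
// attribute's global default (FBO 0), because nothing remembers the
// parent camera's FBO.
struct Renderer {
    unsigned boundFbo = 0;

    void bindCamera(unsigned fbo) { boundFbo = fbo; }

    // Render a node that carries its own FBO in its stateset.
    void renderNodeWithFbo(unsigned fbo) {
        boundFbo = fbo;
        // ... the node draws here ...
        boundFbo = 0;  // restore to the global default, not the parent's FBO
    }
};
```

After the nested node finishes, the rest of the parent camera's scene is drawn with FBO 0 bound, i.e. straight to the screen, which is exactly the extra glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) call observed in this thread.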
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Ok, here's a little example (code is attached). Sorry if it's a bit contrived; I couldn't think of anything simple but illustrative. Basically it renders a cylinder with the following stages:

1) Render the cylinder to a texture from slaveCamera
2) Process the texture from stage 1 with firstProcessor, inverting the colors
3) Render the cylinder to a texture from viewer->camera, this time using the result of stage 2 as a texture
4) Process the texture from stage 3 with secondProcessor, which makes the image look wavy

The problem is that instead of this ordering, we get 1-3-2-4, which means that the pre-render result is always from the previous frame. If you compile and run the program, you'll notice that this causes a green fringe at the edges of the cylinder when you make it move (because the background color of the texture is green). Here's an image of the whole thing: http://a.imagehost.org/view/0333/osgppukuva

To fix this problem, we can try to add firstProcessor as a child of slaveCamera instead of root, as explained in my previous posts. However, this somehow breaks stage 3, which refuses to render to a texture anymore. To see the original problem, run the program and move the object with the mouse. This should give the ugly green borders. Comment out the first line (#define) of the code to see how it breaks when we attempt to alter the rendering order.

Anyway, this is my problem of course, and I'm not demanding that anyone fix it for me, but I think that having the ability to do this would be very beneficial for osgPPU in general. As I mentioned before, osgPPU would be perfectly suited for things such as processing shadow maps or any number of other advanced algorithms, aside from this one little detail.
Thanks, Miika

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=16027#16027

// If this is defined, do the typical attachment to the scene root.
// Comment it out to attempt to fix the render order (which goes wrong).
#define ATTACH_PROCESSOR_TO_ROOT

// NOTE: the attachment's original include list was garbled in the archive;
// the headers below are the ones the visible code requires.
#include <osg/Camera>
#include <osg/Geode>
#include <osg/GraphicsContext>
#include <osg/Group>
#include <osg/Shape>
#include <osg/ShapeDrawable>
#include <osg/Texture2D>
#include <osg/Viewport>
#include <osgViewer/Viewer>
#include <osgPPU/Processor.h>
#include <osgPPU/UnitCameraAttachmentBypass.h>

int main()
{
    // Set up the viewer
    osgViewer::Viewer viewer;
    unsigned int screenWidth;
    unsigned int screenHeight;
    osg::GraphicsContext::getWindowingSystemInterface()->getScreenResolution(
        osg::GraphicsContext::ScreenIdentifier(0), screenWidth, screenHeight);
    unsigned int windowWidth = 640;
    unsigned int windowHeight = 480;
    viewer.setUpViewInWindow((screenWidth-windowWidth)/2, (screenHeight-windowHeight)/2,
                             windowWidth, windowHeight);
    viewer.setThreadingModel(osgViewer::Viewer::SingleThreaded);

    // Add root node with two groups: rttScene will be rendered by slaveCamera
    // and processed, mainScene will be rendered by osgViewer's camera
    osg::Group* root = new osg::Group();
    osg::Group* rttScene = new osg::Group();
    osg::Group* mainScene = new osg::Group();

    // Add something to see, same to both scenes
    osg::Geode* geode = new osg::Geode();
    geode->addDrawable(new osg::ShapeDrawable(
        new osg::Cylinder(osg::Vec3(4.4f,0.0f,0.0f), 1.0f, 1.4f)));
    rttScene->addChild(geode);
    mainScene->addChild(geode);
    root->addChild(mainScene);

    // Create the texture to render to...
    osg::Texture2D* slaveCameraTexture = new osg::Texture2D;
    {
        slaveCameraTexture->setTextureSize(640, 480);
        slaveCameraTexture->setInternalFormat(GL_RGBA);
        slaveCameraTexture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
        slaveCameraTexture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
    }

    // and the RTT camera
    osg::Camera* slaveCamera = new osg::Camera();
    {
        slaveCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        slaveCamera->setClearColor(osg::Vec4(1,0,1,1));
        slaveCamera->setViewport(new osg::Viewport(0,0,640,480));
        slaveCamera->setReferenceFrame(osg::Transform::RELATIVE_RF);
        slaveCamera->setRenderOrder(osg::Camera::PRE_RENDER);
        slaveCamera->attach(osg::Camera::COLOR_BUFFER0, slaveCameraTexture);
        slaveCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
        root->addChild(slaveCamera);
        slaveCamera->addChild(rttScene);
    }

    // Set up a simple osgPPU processor which inverts the input image's colors
    osg::Texture* rttTexture;
    osgPPU::Processor* firstProcessor = new osgPPU::Processor;
    {
        firstProcessor->setCamera(slaveCamera);
        osgPPU::UnitCameraAttachmentBypass* unitCam = new osgPPU::UnitCameraAttachmentBypass();
        // [the rest of the attachment was truncated in the archive]
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Art and J.P.,

thanks for the quick replies. :)

art wrote:
> Is it possible for you to provide me with a simple test case implementation
> to test it? Because then maybe I would be able to find an answer for how to
> correct it.

I'll probably put some simple test case together at the beginning of next week; that should help me isolate it too. I'll get back to you with that, unless I've managed to fix the problem otherwise by then. Who knows, maybe the problem is somehow specific to the bigger program I'm working on now.

> I think osgViewer's camera is bound to FBO 0, because this is the main
> camera, which is supposed to render to the screen.

I've set it to render to a separate texture (viewer->getCamera()->attach() etc.), so it should do that, and normally it does. It's only when I try to alter the rendering order that it starts making that call for no apparent reason. It actually binds the correct FBO just before it makes the call to FBO 0, so the call doesn't come from the regular source. It almost makes me think that OSG is somehow handling this incorrectly, though probably I'm just trying to feed it a badly constructed graph.

> You could try to add a second camera below osgViewer's, which renders
> your scene into an FBO. Then use the output of this camera for further
> processing. So the graph could look like this:
>
> osgViewer
>  |
>  +--- depthCamera (with PRE_RENDER, to an FBO) --- (stuff)
>  |
>  +--- colorCamera (with PRE_RENDER, to an FBO) --- (stuff)
>  |
>  +--- PPU-Graph
>  |
>  +--- (same stuff)
>
> This will cause your rendering to be performed into FBOs and processed by
> osgPPU. The ppu graph will just have no output to the screen, so no UnitOut
> at the end. It will just process your renderings, and the output textures
> should then be used in the rendering of the main scene.
>
> The ppu graph will look similar to this:
>
>     Processor
>      |     |
> UnitCamera UnitCamera
>       \    /
>     UnitInOut
>
> I am not exactly sure, but here it can still happen that osgPPU will be
> executed after your main scene is rendered, so that you get this one-frame
> delay.

I'll probably end up trying things like that if I can't come up with a clean solution. I'm kind of hesitant to make the osgViewer camera into just some "post-hack" stage, because I guess it's in principle supposed to be somehow special. Also, the problem seems to be quite persistent no matter how I try to arrange everything. But I'll look into that.

> If you want to change the render bin number of the processor, you do not
> have to put it under a group and change the group's bin number. You can try
> to do it directly by getting the processor's stateset and changing the
> render bin number there (Processor is derived from Group).

This is just for convenience. I think Processor::init() sets the bin number to 100, and it's run at the first traversal, so it would overwrite my value and I'd have to re-overwrite it later. Perhaps there could be a function to specify the default bin number?

J.P. Delport wrote:
> I'm not sure if this is different from what you tried with the group,
> but have you tried adding the processor as a sibling of the depthcamera,
> i.e. not a child of depthcam, but a child of the viewer? We have some
> algorithms with multiple passes where we just add a bunch of prerender
> cameras to a group and they execute in the order they were added (no
> need to fiddle with bins). I know this works with RTT cameras and FBOs
> (see e.g. the osgstereomatch example), not sure of osgPPU interaction though.

This is pretty much the usual way to use osgPPU, so it should result in typical last-stage processing. I think the "problem" is that osgPPU basically works by adding a bunch of screen-sized quad Geodes with a large renderbin number to the scene, under whichever camera it is placed (if I've understood it correctly). So in this case it will be under the osgViewer's camera and it'll render last. With that -1 group thing I tried to change the ordering so that the ppu Geodes render first thing in the main pass, but for some reason it causes that mystical FBO 0 problem.

Anyway, I'll probably get back on Monday and try to put together some example code then.

- Miika

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=15951#15951
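[Editor's note] The renderbin ordering Miika describes can be simulated with a simple sort: drawables under one camera are ordered by bin number, so osgPPU's full-screen quads in a high-numbered bin draw last, and a negative bin would schedule them first. This is a plain C++ sketch; `DrawItem` and `renderOrder` are made-up names, not OSG API:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Toy model of render-bin ordering under a single camera: items are
// stable-sorted by bin number before drawing, so same-bin items keep
// their insertion order.
struct DrawItem {
    int binNumber;
    std::string name;
};

static std::vector<std::string> renderOrder(std::vector<DrawItem> items) {
    std::stable_sort(items.begin(), items.end(),
                     [](const DrawItem& a, const DrawItem& b) {
                         return a.binNumber < b.binNumber;
                     });
    std::vector<std::string> order;
    for (const DrawItem& item : items) order.push_back(item.name);
    return order;
}
```

This also shows why overriding the processor's bin to -1 moves the quads ahead of the main scene within the same camera, but cannot by itself change which FBO is bound when they draw.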
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi,

Miika Aittala wrote:
> Hi,
> ... where do I put the Processor, and what else do I need to do, in order
> to make its units render AFTER the depthCamera but BEFORE the osgViewer's
> camera? The main camera needs to render to an FBO too, because its results
> are also further processed with another Processor (could multiple
> processors be a problem, by the way)?
>
> Also another approach of adding a group with renderbin -1 to the main
> level and parenting the processor to this gives pretty much the same
> results and problems.
>
> So... any ideas?
> Thanks, Miika

I'm not sure if this is different from what you tried with the group, but have you tried adding the processor as a sibling of the depthcamera, i.e. not a child of depthcam, but a child of the viewer? We have some algorithms with multiple passes where we just add a bunch of prerender cameras to a group and they execute in the order they were added (no need to fiddle with bins). I know this works with RTT cameras and FBOs (see e.g. the osgstereomatch example); I'm not sure of the osgPPU interaction, though.

If it still does not work, maybe you could try to make a minimal example of an OSG RTT camera and an osgPPU processor that shows the problem.

jp
Re: [osg-users] [osgPPU] Running the processor between pre-render and main render
Hi Miika,

Is it possible for you to provide me with a simple test case implementation to test it? Because then maybe I would be able to find an answer for how to correct it.

> However, this causes another problem which I don't even begin to understand
> (which may or may not be related to osgPPU). Everything would be perfect,
> except for some reason now there's an extra call to
> glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) before the main osgViewer
> camera renders (apparently after already having bound the correct FBO
> earlier). This makes it draw directly to the screen, which is wrong. It
> does have its FBO attachments and textures, and they're even cleared
> correctly before rendering, but for some unexplained reason it makes this
> binding that ruins everything. I've spent hours stepping around in OSG's
> sources and, while I can see where this call is made, I have no idea why.
> It's in FrameBufferObject.cpp, apply(State, BindTarget), this part:
>
>     if (_attachments.empty())
>     {
>         ext->glBindFramebufferEXT(target, 0);
>         return;
>     }
>
> (called via osg::State::applyGlobalDefaultAttribute(); the buffer
> attachment list for osgViewer's camera isn't empty - is it applying some
> default state settings when they're not wanted?)

I think osgViewer's camera is bound to FBO 0, because this is the main camera, which is supposed to render to the screen. You could try to add a second camera below osgViewer's, which renders your scene into an FBO. Then use the output of this camera for further processing. So the graph could look like this:

osgViewer
 |
 +--- depthCamera (with PRE_RENDER, to an FBO) --- (stuff)
 |
 +--- colorCamera (with PRE_RENDER, to an FBO) --- (stuff)
 |
 +--- PPU-Graph
 |
 +--- (same stuff)

This will cause your rendering to be performed into FBOs and processed by osgPPU. The ppu graph will just have no output to the screen, so no UnitOut at the end. It will just process your renderings, and the output textures should then be used in the rendering of the main scene.

The ppu graph will look similar to this:

    Processor
     |     |
UnitCamera UnitCamera
      \    /
    UnitInOut

I am not exactly sure, but here it can still happen that osgPPU will be executed after your main scene is rendered, so that you get this one-frame delay.

> Also another approach of adding a group with renderbin -1 to the main level
> and parenting the processor to this gives pretty much the same results and
> problems.

If you want to change the render bin number of the processor, you do not have to put it under a group and change the group's bin number. You can try to do it directly by getting the processor's stateset and changing the render bin number there (Processor is derived from Group).

cheers,
art

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=15947#15947
[osg-users] [osgPPU] Running the processor between pre-render and main render
Hi,

I'm implementing a more advanced SSAO algorithm using osgPPU. I need to use the results more carefully than just multiplying them on top, so the resulting texture is used as an input to a regular surface fragment shader during the main rendering pass. Therefore the processing needs to happen exactly after the depth pre-render pass but before the main rendering pass.

At the moment, the algorithm works nicely, except that it runs after the main render. This means that the results are only available in the next frame, and the lighting lags one frame behind the rest of the scene. This of course causes unacceptable artefacts if anything is moving. There are plenty of other scenarios where doing this would also be necessary, e.g. processing reflections, or processing shadow maps before use in algorithms such as VSM or CSM.

I've looked through all the examples and forum posts, but everyone seems to be happy with processing as the very last stage. Perhaps there's some really obvious simple way to do it, but I just can't make it happen. So basically, if I have a scene graph roughly as follows:

osgViewer (renders to an FBO)
 |
 +--- depthCamera (with PRE_RENDER, to an FBO) --- (stuff)
 |
 +--- (same stuff)

... where do I put the Processor, and what else do I need to do, in order to make its units render AFTER the depthCamera but BEFORE the osgViewer's camera? The main camera needs to render to an FBO too, because its results are also further processed with another Processor (could multiple processors be a problem, by the way)?

My current attempts are as follows (disregard this if there's some nice way to do it instead). I've tried simply adding the SSAO processor as depthCamera's child. I'm not very familiar with all the renderbin/stage stuff, but the idea is that it would be included in the pre-render pass and thus rendered before the main pass begins. This actually almost works, though possibly by accident. Looking at glIntercept, the processor does indeed run before the main camera and does what it's supposed to.

However, this causes another problem which I don't even begin to understand (which may or may not be related to osgPPU). Everything would be perfect, except for some reason now there's an extra call to glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0) before the main osgViewer camera renders (apparently after already having bound the correct FBO earlier). This makes it draw directly to the screen, which is wrong. It does have its FBO attachments and textures, and they're even cleared correctly before rendering, but for some unexplained reason it makes this binding that ruins everything. I've spent hours stepping around in OSG's sources and, while I can see where this call is made, I have no idea why. It's in FrameBufferObject.cpp, apply(State, BindTarget), this part:

    if (_attachments.empty())
    {
        ext->glBindFramebufferEXT(target, 0);
        return;
    }

(called via osg::State::applyGlobalDefaultAttribute(); the buffer attachment list for osgViewer's camera isn't empty - is it applying some default state settings when they're not wanted?)

Also, another approach of adding a group with renderbin -1 to the main level and parenting the processor to this gives pretty much the same results and problems.

So... any ideas?

Thanks,
Miika

--
Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=15945#15945