Re: [osg-users] StateSetSwitchNode suggestion.
Re: [osg-users] StateSetSwitchNode suggestion.
Hi Robert,

Thanks for the explanation.

> I don't think your needs are particularly different from most who are writing games/simulators with the OSG. For large scenes LODs are crucial, and for massive scenes paging is also crucial. The OSG supports both so should scale to larger scenes.

Paging would require that all the custom nodes we create are part of the IVE file format. Inserting the custom nodes that we need is done as a pre-processing step which results in an .ive file. Today we have to insert our custom nodes post-load. It is therefore nice if we can extend the IVE format. You told me in an earlier mail that you are thinking about making the IVE format extendable. Doing that would be a great thing, at least for us. It would make it much easier to implement and use custom nodes. Is there currently (OSG 2.6) any way to extend the IVE format without doing the implementation directly in OSG? (We want to avoid that in order to make future OSG upgrades as easy as possible.)

> The other approach you could take is having custom nodes or drawables that wrap up the whole rendering of a class of objects, or a whole collection of objects. The latter is something that would suit trees/forests.

One of my future tasks is to combine large amounts of very simple trees into one drawable. Our tree rendering does not require alpha-blending, and thus no depth sorting either, so we will most certainly gain a lot on both cull and draw by combining this kind of geometry.

> These approaches would all reduce the number of objects in your scene graph in memory and in the view frustum on each frame and thereby cut the update, cull and draw dispatch costs. If your switch node helps in keeping the number of objects down then thumbs up for it.

I chose the StateSetSwitchNode approach and not the LayerNode approach for the following reasons:
- It is much faster on initialization. (But I am going to figure out why, and maybe then we won't need it after all.)
- It is robust against node-visitors that have too many node-mask bits turned on.
- It is easier to work with.

> W.r.t. init time, typically this is pretty low. Shaders can be expensive. Texture objects and drawables aren't typically too expensive - the osgmemorytest example illustrates this.

My understanding of shaders is that they are compiled at the time they first need to be used (runtime). If that is correct then I doubt that the shaders take a lot of time during OSG's initialization process. I only see this initialization problem when working with the CompositeViewer. Other viewers are fast on init.

Viggo
___
osg-users mailing list osg-users@lists.openscenegraph.org http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
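The tree-combining plan mentioned above (many simple, opaque trees merged into one drawable) can be sketched in self-contained C++ without any OSG types; in OSG the combined array would end up in a single osg::Geometry. This is only an illustration of the batching idea, not the poster's actual code, and `Mesh`/`combineInstances` are made-up names:

```cpp
#include <array>
#include <vector>

// Stand-in for a tree mesh: a flat list of xyz vertex positions.
using Mesh = std::vector<std::array<float, 3>>;

// Bake many instances of a template mesh into one combined vertex array,
// translating each copy to its instance position. Because the trees are
// opaque (no alpha blending, so no depth sorting needed), one big drawable
// can replace thousands of scene-graph leaves, cutting cull and draw cost.
Mesh combineInstances(const Mesh& templateMesh,
                      const std::vector<std::array<float, 3>>& positions)
{
    Mesh combined;
    combined.reserve(templateMesh.size() * positions.size());
    for (const auto& pos : positions)
        for (const auto& v : templateMesh)
            combined.push_back({v[0] + pos[0], v[1] + pos[1], v[2] + pos[2]});
    return combined;
}
```

The trade-off is that the merged geometry has one bound and one StateSet, so per-tree culling and per-tree state are given up in exchange for far fewer objects to traverse and draw.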
Re: [osg-users] StateSetSwitchNode suggestion.
Hi Robert,

> The most general approach is the best, and the performance differences will almost certainly be negligible between your suggested SwitchStateSetNode and my suggested Layer node with nodemask. Feel free to implement both and test them against each other w.r.t. cull traversal time and overall framerate.

I implemented the LayerNode approach and tested both versions using a composite viewer. The StateSetSwitchNode is a tiny bit faster as predicted, but I agree with you that the difference is negligible. The other penalty that I wrote about earlier is however large.

My scene:
- My scene contains 40308 nodes.
- A full traverse results in visiting 380941 nodes (lots of reused geometry).

Pre-processing the scene with LayerNodes:
- 2356 LayerNodes are inserted.
- Each LayerNode is one Group node with 5 child Group nodes.
- Each child node shares the same children as the other child nodes.
- I apply optimizations from osgUtil::Optimizer.
- Result:
  - 5 nodes totally in the scene.
  - 11446441 visits during a full traverse.

Pre-processing the scene with StateSetSwitchNodes:
- 2356 StateSetSwitchNodes are inserted.
- Each is one Group node that has 5 Group nodes embedded.
- All 5 embedded nodes share the same children as the StateSetSwitchNode.
- None of the 5 embedded nodes has any parents.
- I use the same optimizations as above.
- Result:
  - 42664 nodes totally in the scene.
  - 540844 visits during a full traverse.

The penalty:
Our viewer is an osgViewer::CompositeViewer. Calling the run function of the osgViewer::CompositeViewer behaves quite dramatically differently between the LayerNode and the StateSetSwitchNode approach. The LayerNode approach takes approximately 20 times longer from when we call run until the scene appears on screen. 11446441 divided by 540844 = 21.

Conclusion after this testing:
- We do not need the StateSetSwitchNode if the viewer initialization penalty is somehow fixed.
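The visit counts above can be reproduced in miniature. The sketch below is a toy cost model in plain C++ (the function names are invented, and no OSG is involved): a LayerNode-style group whose mask-matched children all share one subtree visits that subtree once per child, while the StateSetSwitchNode dispatch visits it exactly once.

```cpp
#include <cstddef>

// Visits for a LayerNode whose `layers` masked children share one subtree,
// under a traversal whose mask matches every child: the layer group itself,
// each child group, and the shared subtree repeated once per child.
std::size_t layerNodeVisits(std::size_t layers, std::size_t subtreeNodes)
{
    return 1 + layers + layers * subtreeNodes;
}

// Visits for a StateSetSwitchNode: the node dispatches straight to exactly
// one embedded group, so the shared subtree is walked once.
std::size_t switchNodeVisits(std::size_t subtreeNodes)
{
    return 1 + subtreeNodes;
}
```

With 5 layers the ratio per switch point approaches 5, and nested switch points compound, which is consistent with the roughly 21x difference measured above.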
New questions:
- What does the osgViewer::CompositeViewer do from when we call run until rendering starts?
- Is this something that could be done in a pre-processor and saved in an .ive file?
- Other ways to get around this problem?

Regards,
Viggo
Re: [osg-users] StateSetSwitchNode suggestion.
Hi Robert,

> I really can't comment without the actual problem and solutions in front of me; it's just too complex a topic to accumulate a whole thread and then generate in my head the possible code and scene graph you have going on at your side. One thing I can say is that the number of nodes you are playing with looks pretty excessive. I have no clue what type of application you have and what you are trying to do, but there may well be a wholly better way of tackling your problem than throwing a massive scene graph at it.

Ok, I'll explain what we are doing :-)

We are creating a simulator for vehicles, people and combat. A typical world that we display is 15000x15000 meters. The terrain is populated with cities, roads, light-poles at the side of the road, lots of forests, lots of stones lying around in the terrain and other debris. The example I tested with is actually quite simple compared to what we will need to use. We are going to show a realistic sea that uses GPU-side vertex displacement from a real-time generated height-map for waves. The water surface will reflect the world and refract the sub-sea terrain. We are going to add shadows and volumetric fog. The combat scenes will use an extensive amount of particles. The complexity level is high, both in what we demand of our shaders and in the detail of items in the world. We are also considering implementing deferred lighting. Our simulators can be configured to run at low detail if the customer chooses a graphics card that is not too powerful, and at high detail if the customer wants to spend money on high-end hardware. This complexity demands a good system for shaders. We must LOD the geometry in the world and also LOD the shaders we apply to it. Rendering one frame of our scene requires multiple pre-render cameras and sometimes multiple post-render cameras.
Our world is so large that we re-use the node-tree for all the cameras and use the node masks to determine what to render.

We test performance on small scenes and on large scenes. The size of the scene used in an example does not necessarily matter when you are discussing a specific technology/problem. I tested both approaches on one palm-tree instead of the whole world. It too has a similar initialization time problem, but it is so small that it is hard to measure properly. Other computer activity disturbs too much (we are talking Windows, and Windows does not handle multitasking in the best possible way) :-) So in this case throwing a large scene at the problem makes a lot of sense; the numbers stand out more this way.

I am so far impressed by the speed and usability that OSG provides. OSG is a very good engine. It is great to work with.

I am going to run a profiler on the initialization problem. That is probably a good way of figuring out where the time goes. Perhaps I will also do a single-step journey into the run function to see what it actually does.

Have a nice weekend,
Viggo
Re: [osg-users] StateSetSwitchNode suggestion.
Hi Robert,

> By default I would expect it to traverse all children. One variation I considered was to have a NodeMask + StateSet per child; with this combination you'd be able to select different combinations for different traversal masks.

If I set up the LayerNode to have one child for each StateSet I want to switch between, and give each of those children its own node mask, then I can set up the cull-mask of my cameras so that they choose the correct StateSet. This will work fine. The LayerNode will be more flexible than the StateSetSwitchNode, but I think it will be a bit slower.

Let's examine the traverse performance of the LayerNode and the StateSetSwitchNode. In this example we assume that we want to switch between 5 different state-sets. I am only looking at the usage pattern that the StateSetSwitchNode supports.

LayerNode:
Cull-traverse will first run the accept function of the LayerNode. It will then call traverse and call accept for each of the 5 children of the LayerNode. One of the 5 children will pass the mask check and call its traverse function. The LayerNode will be slower the more StateSets you want to switch between. Any traverse whose node mask matches all of them will traverse each of the 5 children, and their children. In my case they share the same children, which can be a whole world.

StateSetSwitchNode:
Cull-traverse will call the accept function of the StateSetSwitchNode. This function is modified and has one extra bitwise-and operation and one extra array lookup. It then calls the traverse function of the node-pointer it read from the array. That sounds like almost 5 times faster than the LayerNode. The StateSetSwitchNode will not run slower if we add more StateSets to switch between. Any traverse will traverse only one of the nodes in the array and that node's children. All nodes in the array share the same children, and in this solution they get traversed one time instead of 5.
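The accept-function dispatch being compared here boils down to one bitwise AND and one array lookup. A self-contained model of just that selection step (no OSG; `Variant`, `StateSetSwitchModel` and `select` are illustrative names, not proposed API):

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for the embedded groups: each carries the id of the
// StateSet it would apply before traversing the shared children.
struct Variant { int stateSetId; };

struct StateSetSwitchModel
{
    // Size must be a power of two so the index can be masked instead of
    // range-checked; unused slots would alias the default variant.
    std::vector<Variant> variants;

    // One bitwise AND + one array lookup, independent of the variant count -
    // this is why the cost does not grow with the number of StateSets.
    const Variant& select(std::uint32_t traversalMask) const
    {
        return variants[traversalMask & (variants.size() - 1)];
    }
};
```

Because the cost is constant, adding more StateSets only grows the array, not the per-traversal work, whereas the LayerNode's accept loop grows linearly with the child count.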
I feel that I need to state that the StateSetSwitchNode comes with a rule-set:
- Only one of the state-sets shall be traversed during any traversal.
- They shall all point to the same sub-tree (always).
These rules are necessary in order to make the cull-traverse as fast as possible.

My conclusion after this e-mail is still that I want to implement the StateSetSwitchNode as an addition to the SwitchNode and a future version of the LayerNode. It would be very beneficial for me to integrate it into OSG and support it in the .ive and .osg file formats. I would be happy to contribute a good example of usage and the OSG implementation if you give a green light to OSG integration.

Viggo
[osg-users] StateSetSwitchNode suggestion.
Solution suggestion: StateSetSwitchNode

Motivation:
- I need to render with different shader-programs and state-sets, on the same geometry, during one tick.
- An example of when I need this is explained in the EXAMPLE section below.

I hope from this e-mail to get feedback:
- Is this solution ok?
- Can it be improved?
- Can I submit it to OSG? And have it supported in the .ive file format?

Solution:
I use Group instances to hold the StateSets.

I need to store Group pointers in an array:

class StateSetSwitchNode : public Group
{
    ...
private:
    std::vector<Group*> _nodeArray;
};

I need to override the accept function:

void StateSetSwitchNode::accept( osg::NodeVisitor& nv )
{
    if( nv.validNodeMask( *this ))
    {
        if( osg::Group* group = _nodeArray[ INDEX ] )
        {
            // Apply the Group holding the StateSet instead of the 'this' pointer
            nv.pushOntoNodePath( group );
            nv.apply( *group );
            nv.popFromNodePath();
        }
    }
}

I will explain the INDEX and array size later in this e-mail.

Each Group pointer in the _nodeArray:
- must be set up with the same children as the StateSetSwitchNode instance has.
- will not have any parents (unless the element points to the StateSetSwitchNode instance).
- will by default point to the StateSetSwitchNode instance. The StateSetSwitchNode's own StateSet is the default StateSet.

I am concerned about how smart it is to have nodes that are not tied to a parent but that link to multiple children in the node-tree. It will not affect performance when traversing children, but any parent traversal will go into all of the extra nodes stored in the _nodeArray. They do not have parents, so it is not a large performance penalty.

The class will also need function overrides for adding/removing/replacing children, as we need to ensure that each element in the array points to the same children as the StateSetSwitchNode.
We do not need to do any code changes to OSG (except for supporting the class in the file formats and adding the class to OSG). Cull-traversal does not suffer a performance hit unless you actually use the StateSetSwitchNode. Using the StateSetSwitchNode comes with a minimal performance hit for StateSet switching (a bitwise-and operation + an array lookup).

INDEX (and the array size):
We need some way to select which array index we want to use. My need is to have multiple cameras render the scene in different ways (by using different state-sets and shader-programs). I decided to do it in a way that needs no new code, except for the indexing inside my new class. So I use the traversal-mask, which automatically makes it possible for my cameras to decide which StateSets to render the scene with. I allocate some of the least significant bits in the node-mask to be StateSet selectors. So if I use 4 bits for this, then I get a total of 16 different StateSet possibilities in each StateSetSwitchNode.

We do not want to check the array size during cull-traverse, so the size of the array should be a power of two so that we can bitwise-and the index before we read the array. All elements in the array thus need to point to an instance. Any unused elements will point to the default instance (which is the StateSetSwitchNode instance itself). This must be enforced in the StateSetSwitchNode constructor.

INDEX (in the above code) is thus replaced by this:

(nv.getTraversalMask() & (_arraySize-1))

EXAMPLE:
Currently we render our scene three times each tick:
- One pre-render camera that renders the scene and generates a water-surface reflection image.
- One pre-render camera that renders the scene and generates a sub-sea refraction image.
- One camera that renders the scene and uses the two textures above to display a good looking sea surface.
The pre-render cameras render only parts of the scene (those nodes that actually need to be in the reflective image and those that are underneath water).
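The bit allocation described in this section (low bits as StateSet selectors, high bits as the ordinary scene-selection mask) can be written down as two small helpers. These are hypothetical helper functions for illustration only, not OSG API:

```cpp
#include <cstdint>

constexpr std::uint32_t kSelectorBits = 4;                  // 16 StateSet slots
constexpr std::uint32_t kSelectorMask = (1u << kSelectorBits) - 1;

// Build a camera cull mask: keep the ordinary scene bits in the high part
// and place the desired StateSet index in the reserved low bits.
constexpr std::uint32_t makeCullMask(std::uint32_t sceneBits,
                                     std::uint32_t stateSetIndex)
{
    return (sceneBits & ~kSelectorMask) | (stateSetIndex & kSelectorMask);
}

// What the node's accept() computes: INDEX = traversalMask & (arraySize-1),
// assuming the array size equals the number of selector slots.
constexpr std::uint32_t stateSetIndex(std::uint32_t traversalMask)
{
    return traversalMask & kSelectorMask;
}
```

One consequence worth noting: the selector bits can no longer serve as ordinary on/off visibility bits, so they have to be reserved consistently across the whole application's node masks and cull masks.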
Our terrain uses four different shaders:
- A high detail shader on the LODs that are close to the camera.
- A low detail shader on distant LODs.
- A lower detail shader on the reflection surface, both for close and distant LODs.
- A special fog shader and lower detail, both for close and distant LODs, when under water.
The same goes for vegetation and buildings (which have different shaders than the terrain). We can optionally switch to even simpler shaders when the water surface starts to have higher waves. So we will dynamically be able to speed up reflection and underwater rendering based on wave-height. This would be a matter of changing the cull-mask on the pre-render cameras and writing even simpler shaders.

We use the StateSetSwitchNode to select which shaders to render with, and this is controlled by setting the lower bits in the camera's cull-mask. We use bit 0, 1 and 2 to
Re: [osg-users] StateSetSwitchNode suggestion.
Hi Robert and David,

The Switch node:
I looked at the Switch node but came to the conclusion that I should not use it. Using it would work if I add a cull-callback on it and set the switch position during cull-traverse. The switch position needs to be changed according to which camera is active. The overhead on each switch is higher than in my suggestion. My suggestion needs one bitwise-and operation and one array lookup. The Switch node would need an additional callback call, a bitwise-and operation (to figure out what position the switch needs to be in), a call to Switch::setValue() which does an if, an array-indexed write, and dirtyBound() (which I do not know how much of a penalty you get from; I guess the bounding-sphere will need to be re-calculated). And in addition, the traverse itself runs a loop to figure out which child to call accept on. The camera itself should not do the switching. It would need its own traversal to do so, and that is too much overhead at runtime.

Solving the problem by only using node-masks:
This works fine. But there is a penalty. In this solution I have to specify a Group node that has one child per StateSet that I want to switch between. Each child holds its own node-mask so that the cull-mask can be used to switch between them. Each child points to the same node(s) as its child(ren). The penalty I get from this is that any traverse that has multiple bits turned on will go through, for example, all of the nodes with unique mask bits set. This in turn means that their children (which all of them share) will be traversed one time for each of the nodes with unique node masks. Any such traverse will thus cost much more. A switch that is high up in the tree will thus cause a large tree to be traversed many more times than necessary. Any traverse that forces node masks on will also cause the same problem as above (even if I turn off the node masks on the special nodes). I have tried this solution.
I saw that the initialization process for OSG took much longer with this kind of node setup. So I think OSG at some point does a scene traverse without a node visitor. It seems to happen when, for example, I switch threading mode.

Layer solution:
Sounds interesting. Will the Layer node use the traverse function to actively choose which child to traverse? One way would be to have each child in an array and somehow ensure that only it gets called when you call Group::traverse(). How would you do that?

Viggo

Date: Wed, 19 Nov 2008 13:44:49 + From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] StateSetSwitchNode suggestion.

Hi Viggo, One thing I have considered in the past is to have an osg::Layer node which is a group that has a StateSet per child; this does sound kinda similar to what you are after. You can implement this without any custom nodes just by arranging Group/Switch and StateSets as required - it's just a bit more wordy w.r.t. lines of code. Robert.
[osg-users] I need to write an ive plugin.
Hi,

I have written a new class which inherits from osg::Group. I have written a plugin for the .osg file format; that works fine. I now need to write a plugin for the .ive file format.

My initial thought was to copy-paste the ive::LOD code and replace the actual read/write parts according to my needs. I am compiling this code inside one of my own projects. I added includes for ive/ReadWrite.h and ive/Group.h. The code compiles. Linking results in errors about unresolved externals. I could not find a library to link with, and I see that the header files inside ive do not have export statements. So I think I have gone about this in a very wrong way.

Does anybody have an example or directions for me on how to do this? I also wonder how the code written as an ive plugin gets called when reading/writing ive files. Do I need to register my class somewhere?

Regards,
Viggo
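On the "Do I need to register my class somewhere?" question: OSG's serialization plugins generally register themselves through static proxy objects whose constructors run at library load time, so the format code can look a class up by name when it meets it in a file. The sketch below models only that registration mechanism in self-contained C++; the type and function names (`NodeIO`, `nodeRegistry`, `RegisterNodeProxy`) are invented for illustration and are not the osgDB API.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// A toy registry mapping a class name to its serialization hook.
struct NodeIO
{
    std::function<std::string()> write;  // real plugins stream binary data
};

std::map<std::string, NodeIO>& nodeRegistry()
{
    static std::map<std::string, NodeIO> r;  // constructed on first use
    return r;
}

// A static instance of this proxy in the plugin's translation unit is the
// "registration": its constructor runs before main(), entering the class
// into the registry so the reader can dispatch on the name it finds in a file.
struct RegisterNodeProxy
{
    RegisterNodeProxy(const std::string& className, NodeIO io)
    {
        nodeRegistry()[className] = std::move(io);
    }
};

static RegisterNodeProxy myGroupProxy(
    "MyCustomGroup", { [] { return std::string("MyCustomGroup data"); } });
```

This is why copy-pasting the read/write code into your own project links but never gets called: without the proxy object living inside the plugin that the reader loads, the registry entry is never made.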
Re: [osg-users] performance issues with RTT
Hi,

Cool, nice to know that this happens for others too.

Additional notes:
- FRAME_BUFFER instead of FRAME_BUFFER_OBJECT only works if you want to render to only one texture.
- Output will be stored both in the frame-buffer and in the texture you point to (at least this happens for me).

Cheers,
Viggo

Date: Mon, 8 Sep 2008 21:45:46 +0200 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] performance issues with RTT

Hi, after reading Viggo's post I changed my implementation to use FRAME_BUFFER instead of FRAME_BUFFER_OBJECT and this did the job for me. At least on the fast machine (the one I posted details about earlier) it now runs fast enough without having to make sacrifices resolution-wise, and this is all I could hope for at the moment ;) If I find out other ways to speed everything up or deal with this problem I will let you know. Thanks, Steffen
Re: [osg-users] osgEphemeris
Hi Adrian,

I have experienced a similar problem.

First, check that you set a decent radius for the Ephemeris dome, for example 5000 meters. Ensure that your far-clip is at least that far away.

Ephemeris can be set to follow the camera. If you do not set this then you must have the camera inside the Ephemeris dome. If you set it to follow the camera, then you are supposed to see it all the time. I did however get into a struggle with it where the dome disappeared at seemingly random times. It depended on where my camera was and on the radius of the Ephemeris dome. I learnt that if the Ephemeris dome radius was low (5000 meters) then it would often pop away (as in not visible) when I moved my camera away from the world centre. The problem increased the further away I went. At 1meters from the world centre I would almost never see the Ephemeris dome. Ephemeris was in fact following my camera, but it does this in a special way: it actually updates its position after culling is almost done. So at times it may be culled even though you say it shall follow the camera. The solution was to write a node-visitor that turns off culling for Ephemeris's root node and all of its child nodes. I am not sure if this is the same problem as you have, but you can try it.

The node visitor's apply function looks like this:

void DisableCullingVisitor::apply( osg::Node& node )
{
    node.setCullingActive( false );
    traverse( node );
}

Ensure that the visitor is set up to traverse all children (construct it with osg::NodeVisitor::TRAVERSE_ALL_CHILDREN).

Hope it helps,
Viggo

Date: Tue, 9 Sep 2008 14:29:04 +0200 From: [EMAIL PROTECTED] Subject: [osg-users] osgEphemeris

Hi, just come back to the good old osgEphemeris, I can not see any sky texture, what is wrong? Does someone know the bug? adrian -- Adrian Egli
Re: [osg-users] performance issues with RTT
Good morning,

I did some RTT research some weeks ago and found an interesting problem:

Running the pre-render OSG example with osg::Camera::RenderTargetImplementation set to osg::Camera::FRAME_BUFFER results in a nice 60 fps on my hardware. Running the same example with osg::Camera::RenderTargetImplementation set to osg::Camera::FRAME_BUFFER_OBJECT results in a not so nice 30 fps on my hardware. Please note that the only difference between the two runs is the FRAME_BUFFER vs FRAME_BUFFER_OBJECT setting.

My own RTT application suffers from the exact same problem (the complexity of my scene has no impact on it). I am at some point going to test this on different kinds of hardware and drivers, just to see if this is a driver/hardware problem. Robert told me he had seen no such loss in framerate on Mac and Unix. I am running Windows. I have seen a very similar problem when working with Direct-X earlier, so I believe this is a driver or hardware issue, but I do not know for sure yet.

Regards,
Viggo

Date: Sun, 7 Sep 2008 09:27:03 +0200 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] performance issues with RTT

Good morning, I'm just rendering to texture, no images or pixel reading involved. Regards, Steffen
[osg-users] Help! I have a render target problem.
Hi,

I am setting up my camera to render to a texture with the following commands:

camera.setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
camera.attach( osg::Camera::BufferComponent( osg::Camera::COLOR_BUFFER0 ), myBuffer );
camera.setDrawBuffer( GL_COLOR_ATTACHMENT0_EXT );

The camera now renders to the texture myBuffer, which is an osg::TextureRectangle*. After many ticks I decide to stop rendering to the texture, so I run the following commands:

camera.detach( osg::Camera::BufferComponent( osg::Camera::COLOR_BUFFER0 ) );
camera.setDrawBuffer( GL_NONE );
camera.setReadBuffer( GL_NONE );
camera.setRenderTargetImplementation( osg::Camera::FRAME_BUFFER );

I do not delete the myBuffer pointer. I continue to read from myBuffer even though I do not write to it anymore.

PROBLEM: The program continues to render to the myBuffer texture, and not to the frame-buffer.

Regards,
Viggo
[osg-users] Frame-rate issue on RTT
Hi,

I have seen a strange thing about framerate... I have a complex city where I get 20 fps (not optimized yet)...

I set the following:
- Render target implementation = osg::Camera::FRAME_BUFFER
- I attach an RGBA buffer that is the same size as the frame-buffer.
Result:
- Framerate stays at 20 fps.
- Colors are written to the frame-buffer.
- Colors are also written to the RGBA surface I attached to the camera.
- I thus have 20 fps and two copies of my scene-image.

I set the following:
- Render target implementation = osg::Camera::FRAME_BUFFER_OBJECT
- I attach an RGBA buffer that is the same size as the frame-buffer.
Result:
- Framerate drops to 10 fps.
- Colors are written to the RGBA surface I attached to the camera.
- Nothing is written to the frame-buffer.
- I thus have 10 fps and only one copy of the scene-image.

So one mode gives me more data and a high framerate, while the other gives me less data and a lower framerate. This sounds odd.

Another test:
- Render target implementation = osg::Camera::FRAME_BUFFER_OBJECT
- I attach an RGBA buffer that is the same size as the frame-buffer.
- I attach another RGBA buffer that is the same size as the frame-buffer.
- I set up the pixel-shaders to output some extra data from the scene to the second buffer.
Result:
- Framerate stays at 10 fps.
- Colors are written to the RGBA surface I attached to the camera.
- The second buffer gets the data I wrote to it.
- Nothing is written to the frame-buffer.
- I thus have 10 fps and two copies of the scene-image.

My conclusion:
- I assume that setting FRAME_BUFFER_OBJECT means that the hardware changes to some kind of pipeline that is much slower than normal.
- (I am using an NVIDIA Quadro FX 1500 card.)

And my question to you:
- Does anybody know if my conclusion is all there is to it, or is there something I can do to speed things up?

Regards,
Viggo
Re: [osg-users] Help! I have a render target problem.
Date: Wed, 13 Aug 2008 17:27:14 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Help! I have a render target problem.

> No bug, you are just trying to do something in a completely wrong way. As I said, NodeMask works just fine; if you want another alternative you can just put a Switch node above the RTT Camera.

Ok, I thought it was a bug. I understand now that setting a camera to RTT is not reversible, and there is probably a good reason for that.

> It really sounds like you are making a relatively straightforward problem really complicated. Having 100 RTT Cameras in the scene graph is not a big issue for the OSG. Trying to reuse a Camera can be done but it's really awkward, and is more likely to cause you more grief than solve anything. Also, you having more grief pushes the mailing list to try and help you do something which is in the end rather counter-productive; personally I have better things to do. Robert.

I sometimes experiment with multiple ways to complete a goal. It gives me a deeper knowledge of (the for me new technology) OSG, and it increases the chance of a good end result. I usually avoid messaging this list when I am on a clearly experimental track. This time it was something I believed to be a bug.

Regards,
Viggo
Re: [osg-users] Frame-rate issue on RTT
Hi Robert, I see a drop from 60 fps to 30 fps in the osgPreRender example when I switch from frame buffer to frame buffer object. I am going to try to run that example on some different hardware/drivers to see if this happens only on some of them. Viggo

Date: Wed, 13 Aug 2008 17:31:28 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Frame-rate issue on RTT

Hi Viggo, I haven't yet seen performance deltas between rendering scenes to a frame buffer and a frame buffer object, but my own testing is rather limited to Linux and NVIDIA these days, with an occasional foray on the Mac. There is plenty unsaid about your setup that could be causing the performance delta; there is a chance that it is just the FBO pushing the OpenGL driver onto a slow path, but just as much chance that something else is the cause. In general I'd say a 20 fps frame rate is very poor and needs to be addressed before other areas such as FBOs are looked at. You need to start by looking at what the bottlenecks are - CPU, GPU etc. The osgviewer stats can help reveal this. Robert.
[osg-users] Render capability checking, is this a good way?
Hi, I need to automatically check the rendering capabilities of the current hardware and choose rendering techniques from that. I want to find the best practical way to do this. I have looked into the OSGShaderTerrain example. It uses an osg::GraphicsOperation class to do the actual capability checking. My application creates an osgViewer::CompositeViewer instance. Basically this is the initialization in pseudocode:

viewer = new osgViewer::CompositeViewer();
view = new osgViewer::View;
viewer->addView( view );
view->setSceneData( myRootNode );
view->setUpViewInWindow( 100, 100, 1024, 768 );
viewer->run();

Now say that I have 3 different levels of rendering techniques:
2 = Complex
1 = Normal
0 = Simple

If I use the same method as the OSGShaderTerrain example, then I would write the code like this:

level = 3;
while( level-- )
{
    viewer = new osgViewer::CompositeViewer();
    // myTestSupportOperationClass is a specialized version of osg::GraphicsOperation.
    capabilityChecker = new myTestSupportOperationClass( level );
    viewer->setRealizeOperation( capabilityChecker );
    view = new osgViewer::View;
    viewer->addView( view );
    view->setSceneData( myRootNode );
    view->setUpViewInWindow( 100, 100, 1024, 768 );
    viewer->realize();
    if( !capabilityChecker->_supported )
    {
        // ...cleanup so we are ready to initialize again...
        continue; // Check one lower level...
    }
    viewer->run();
    break; // We did manage to run, so break the loop.
}

I wonder if this is a good way to organize the capability testing? Are there other ways I should explore?

QITODMWTAM section (Questions I think OSG developers may want to ask me):
-- Q: What capabilities are you trying to check?
A: I want to check if the current hardware can run shaders. I would also check basic stuff like how many texture units I can use. I would also check if I can render to an offscreen surface.
I plan to give each special effect that I write a capability-checking function which I can call from my specialization of the osg::GraphicsOperation class. Each piece of code I have that requires checking will then be able to stop the system from using it if the capability needs are not met. Cheers, Viggo
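The fallback loop above reduces to a small, testable pattern: try the highest technique level first and step down until one is supported. The sketch below uses a hypothetical `probeLevel` callback in place of realizing a viewer with the GraphicsOperation attached; it illustrates the control flow only and is not OSG API:

```cpp
#include <cassert>
#include <functional>

// Hypothetical stand-in for the osg::GraphicsOperation capability probe:
// probeLevel(level) returns true if the hardware supports that technique level.
// pickLevel tries the highest level first and falls back, mirroring the
// while( level-- ) loop in the pseudocode above.
int pickLevel(int highestLevel, const std::function<bool(int)>& probeLevel)
{
    for (int level = highestLevel; level >= 0; --level)
    {
        if (probeLevel(level))
            return level;   // first level the hardware can handle
    }
    return -1;              // nothing supported at all
}
```

In the real application each probe would realize a viewer with the capability-checking operation installed and inspect its `_supported` flag, as in the pseudocode.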
Re: [osg-users] I want to read one pixel from a texture in video memory, back to system memory.
Hi Art, I took a look into the UnitOutCapture.cpp file. It uses the Image::readImageFromCurrentTexture function. I assume it reads the entire texture back to system memory. I may be able to use that... But I assume that if I send a compressed texture to video memory, then I will get a compressed texture back, and the Image::getColor function does not support compressed textures. I have written a node-visitor that prevents OSG from deleting the Image pointers of textures in node-files that my application reads, so I have solved my problem as long as I am not using compressed textures. My hope now is that IF my program encounters a compressed texture, then it would be able to read a pixel from video memory through glReadPixels. That function is capable of reading only one pixel or a few pixels, so I assume that reading from a compressed texture means that it will decompress in order to read the pixel. Doing so would make my intersection code robust, as it would handle node-files that contain compressed images. Cheers, Viggo

From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Date: Fri, 1 Aug 2008 13:45:03 +0200 Subject: Re: [osg-users] I want to read one pixel from a texture in video memory, back to system memory.

Hi Viggo, Have you tried to set up glReadBuffer properly first? Take a look into osgPPU; it already has the functionality to read from textures (UnitOutCapture) and to be used for post-processing calculations like you have. Maybe this could help you to solve the problem. Best regards, Art

On Thursday 31 July 2008 08:48:19 Viggo Løvli wrote: Hi, I am trying to write an intersect post-processing function that will read the alpha value of the hit location. I have so far managed to get the texture coordinates for each collision point. I have also managed to find which texture (in unit 0) is used by the object that I collide with.
I have a pointer to the texture: osg::Texture2D* texture; The scene I am intersecting is read from a pre-created file and all osg::Image data is automatically deleted, so I can not always use texture->getImage() and read the pixels from there. In the cases where getImage returns a NULL pointer I do this:

osg::Texture::TextureObject* to = texture->getTextureObject( 0 );
glBindTexture( GL_TEXTURE_2D, to->_id );
unsigned char data[4];
glReadPixels( xLocation, yLocation, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data );
glBindTexture( GL_TEXTURE_2D, 0 );

This code compiles and it runs. The problem is that it always returns the same values for RGB and A, no matter what my xLocation and yLocation are. Can anybody see what I am doing wrong? Regards, Viggo
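For the uncompressed case, once the texture image is back in system memory (for example via Image::readImageFromCurrentTexture), looking up one texel's alpha is plain index arithmetic. A minimal sketch, assuming a tightly packed 8-bit RGBA image with no row padding; `alphaAt` is a hypothetical helper, not an OSG function:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns the alpha byte of texel (x, y) in a tightly packed RGBA8 image.
// Assumes row-major storage with no row padding; y = 0 is the first row.
uint8_t alphaAt(const std::vector<uint8_t>& rgba,
                std::size_t width, std::size_t x, std::size_t y)
{
    const std::size_t texel = (y * width + x) * 4; // 4 bytes per RGBA texel
    return rgba[texel + 3];                        // alpha is the 4th byte
}
```

This is essentially what a getColor-style lookup does for uncompressed data; compressed formats would need to be decompressed first, which is the sticking point discussed above.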
[osg-users] I want to read one pixel from a texture in video memory, back to system memory.
Hi, I am trying to write an intersect post-processing function that will read the alpha value of the hit location. I have so far managed to get the texture coordinates for each collision point. I have also managed to find which texture (in unit 0) is used by the object that I collide with. I have a pointer to the texture: osg::Texture2D* texture; The scene I am intersecting is read from a pre-created file and all osg::Image data is automatically deleted, so I can not always use texture->getImage() and read the pixels from there. In the cases where getImage returns a NULL pointer I do this:

osg::Texture::TextureObject* to = texture->getTextureObject( 0 );
glBindTexture( GL_TEXTURE_2D, to->_id );
unsigned char data[4];
glReadPixels( xLocation, yLocation, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data );
glBindTexture( GL_TEXTURE_2D, 0 );

This code compiles and it runs. The problem is that it always returns the same values for RGB and A, no matter what my xLocation and yLocation are. Can anybody see what I am doing wrong? Regards, Viggo
[osg-users] Pick color from scene
Hi, I want to do a ray intersection toward a tree model (as an example) and ignore the hit location if it contains a zero alpha value. Basically this is a line-of-sight check that takes textures into consideration. I think a good way is to use the existing intersection system and thus produce a list of hit locations. I will then, for each hit location, need to figure out what alpha value it holds. I can then ignore those hit locations that we can see through. So, I am looking for a scene-graph color picker. Does anybody have something like this that they want to share? Note: I am not looking for something that uses hardware to render part of the scene and then a read-back from hardware. This should be CPU only :-) Regards, Viggo
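The post-processing step described above — walk the hit list in order and keep the first hit whose texel is opaque enough — can be sketched as follows. `Hit` and its alpha field are hypothetical stand-ins for the intersection result and the texture lookup at the hit's texture coordinate:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical intersection result: distance along the ray plus the
// alpha value sampled at the hit location's texture coordinate.
struct Hit
{
    float distance;
    float alpha;
};

// Returns the index of the first hit the line of sight cannot pass
// through (alpha above the threshold), or -1 if the ray sees through
// every hit. Hits are assumed sorted by distance along the ray.
int firstOpaqueHit(const std::vector<Hit>& hits, float alphaThreshold)
{
    for (std::size_t i = 0; i < hits.size(); ++i)
    {
        if (hits[i].alpha > alphaThreshold)
            return static_cast<int>(i);
    }
    return -1;
}
```

A threshold (rather than exactly zero) leaves room for texture filtering and compression artifacts around the transparent regions.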
Re: [osg-users] View Frustum Culling
Hi Kaiser, I have just done parts of what you need. Make yourself a group node which you place as the root of the scene-graph that you need to list objects from. Override its traverse function and post-process the cull visitor:

void yourGroupClass::traverse( osg::NodeVisitor& nv )
{
    osg::Group::traverse( nv );
    if( nv.getVisitorType() == osg::NodeVisitor::CULL_VISITOR )
    {
        osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>( &nv );
        if( cv )
        {
            // Do your stuff here.
            // You should be able to find what you need inside cv's RenderStage->RenderBinList...
            // The cull traverse is done, so everything that is visible must be stored in the render bins.
            // I am pretty sure you should be able to find some information in these structures which points you to your objects.
        }
    }
}

Cheers, Viggo

Date: Tue, 29 Jul 2008 09:56:59 +0200 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: [osg-users] View Frustum Culling

Hello everyone, Maybe it sounds basic. I have a camera that will never change place. I have a database with a one-object/one-file mapping. So next time I load my scene I'd like to load only the relevant objects to shorten loading time. Simple task: I need to get a list of all VISIBLE objects in the view frustum. Can this be done with the cull traverser? The docs say it collects all the objects in a special order. Are these all objects in the end, or only the visible ones? How can I get the visible nodes? What things are to be taken care of? Greetings
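The underlying idea — listing what survives view-frustum culling — can be illustrated without OSG at all. The sketch below is a simplified CPU-side visibility pass over named bounding spheres against a frustum given as inward-facing planes; all types here are hypothetical stand-ins, not the osgUtil classes:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// A frustum plane in the form nx*x + ny*y + nz*z + d, with the normal
// pointing inward: a point is inside the plane when the expression is >= 0.
struct Plane  { float nx, ny, nz, d; };

// An object as a named bounding sphere, like OSG's per-node bounds.
struct Object { std::string name; float x, y, z, radius; };

// Keep every object whose sphere is not fully outside any frustum plane.
// This is the same sphere-vs-plane test a cull traversal applies per node.
std::vector<std::string> visibleObjects(const std::vector<Plane>& frustum,
                                        const std::vector<Object>& objects)
{
    std::vector<std::string> result;
    for (const Object& o : objects)
    {
        bool visible = true;
        for (const Plane& p : frustum)
        {
            const float dist = p.nx * o.x + p.ny * o.y + p.nz * o.z + p.d;
            if (dist < -o.radius) { visible = false; break; } // fully outside this plane
        }
        if (visible) result.push_back(o.name);
    }
    return result;
}
```

Note this is conservative, like the cull traversal itself: an object straddling a plane is kept even if none of its triangles end up on screen.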
Re: [osg-users] Pick color from scene
Hi Brede, Thanx for the info. It looks like it will give me the texture coordinates of the polygon that is hit, and that is a very good start :-) Regards, Viggo

Date: Tue, 29 Jul 2008 12:14:18 +0200 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Pick color from scene

Hi Viggo, Some time ago there was a thread called "How to retrieve a U, V texture coordinate from a picking action?". http://thread.gmane.org/gmane.comp.graphics.openscenegraph.user/26306/focus=26352 Regards, Brede
Re: [osg-users] sky model tracking the camera...
Hi Shayne, Add your sky node wherever you want it in your scene-graph (not as a child of the camera). Override the sky transform node's traverse function and do something like this:

void yourNode::traverse( osg::NodeVisitor& nv )
{
    switch( nv.getVisitorType() )
    {
        case osg::NodeVisitor::CULL_VISITOR:
            setPosition( nv.getEyePoint() );
            break;
    }
}

Your node will be set to the camera position on each cull traverse. It will therefore work properly no matter how many cameras or render passes you add to your scene. Hope it helps, Viggo

Date: Tue, 29 Jul 2008 18:14:25 -0600 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: [osg-users] sky model tracking the camera...

Hello, I would like to have my sky model track the camera position so that as the camera moves, the sky model moves with it. To do this, would I add the sky model transform as a child of the cameraNode? Does anyone have any code snippets that may demonstrate how I might do this? Thanks in advance… -Shayne
Re: [osg-users] sky model tracking the camera...
Forgot one thing... At the end of the traverse function, call:

osg::Group::traverse( nv );

My sky is an osg::Group which holds a pointer to the positionTransform object, so I am therefore calling osg::Group::traverse and not something else :-) Viggo
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert, My current code instances a new one each cull traverse. I wonder if that means I get a sizeof(osgUtil::RenderBin) memory leak each cull traverse? Anyway, I am going to try to multi-buffer it. I should be able to count the number of cull-traverse calls I get for each update traverse. That (I guess) should tell me automatically how many instances of osgUtil::RenderBin I would need. I will multiply it by two for double buffering and then it should work like a dream. Cheers, Viggo

Date: Thu, 24 Jul 2008 17:53:57 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)

On Thu, Jul 24, 2008 at 3:08 PM, Viggo Løvli [EMAIL PROTECTED] wrote: The only thing that annoys me now is the call to new(). Creating one bin as a member variable in my class and using it instead of the call to new causes the system to crash. Double buffering, I guess. So maybe I can fix this by having two RenderBin instances in my class and using them every other call? But I suspect that may not be enough? Would I need two per cull call that is run per frame? Say that I in the future render with 4 different cameras and cull the world 4 times. Would I then need 4*2 bin instances to use instead of calling new()?

The OSG's rendering backend is both very dynamic (created on demand on each frame) and flexible (it can be a huge range of bin configurations, potentially different on each frame), and the threading and multiple graphics context rendering are all thrown into this mix. This makes reuse of data something you have to be very careful about; in your case you'd either need to create on each new frame or multi-buffer. Robert.
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert,

Hi Viggo, The rendering backend uses ref_ptr's so there shouldn't be any leak; assigning the new RenderBin will lead to the previous one being deleted.

Yep, I figured out that one :-)

Rather than second-guess what will be needed, might I suggest you maintain a recycling list of ref_ptr to your custom RenderBin, then traverse this list to find an entry that has a referenceCount() of one, and take ownership of this.

I took into usage a std::list which starts off empty. I am currently counting how many times cull-traverse is called and growing the list as needed. Your idea of checking the reference count is better; it will make the system more robust. I will continue using a std::list for this. I will keep track of the last used element of the list, so when I need a new one I will traverse the list from that point. This should increase the chance of finding a free entry immediately. If I get through the whole list without finding one, then I will insert a new element into the list and use that one. The list will thus grow to the maximum needed size and stay there until the class is deleted. Future changes to the number of cameras, or whatever re-configurations can cause one thread to hold data longer, will thus automatically work. I am also ensuring that the original RenderBinList of the RenderStage is not changed anywhere else than for element 10. I used to add a new bin to element 9, but that may be in use already. Element 10 will instead point to a new bin that contains its own bin #9 and #10. Both will point to the original content that the RenderStage pointed to in its bin #10. Robert.
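Robert's recycling-list suggestion can be sketched with a minimal stand-in for reference counting. `Bin` and `BinPool` below are hypothetical simplifications (a real implementation would use osg::ref_ptr over osg::Referenced, which deletes at zero count); the point is the acquire rule: hand out the first entry whose count is 1, i.e. held only by the pool itself, and grow otherwise:

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Minimal stand-in for an intrusively counted render bin.
struct Bin
{
    int refCount = 0;
    void ref()   { ++refCount; }
    void unref() { --refCount; }  // real osg::Referenced would delete at 0
};

class BinPool
{
public:
    // Find a bin with a reference count of 1 (held only by the pool);
    // if none is free, grow the pool, mirroring the std::list scheme
    // described above. The caller ref()s the returned bin while using it.
    Bin* acquire()
    {
        for (Bin& b : _pool)
            if (b.refCount == 1) return &b;
        _pool.push_back(Bin());
        _pool.back().ref();       // the pool's own reference
        return &_pool.back();
    }
    std::size_t size() const { return _pool.size(); }
private:
    std::list<Bin> _pool;         // std::list: growth never invalidates pointers
};
```

std::list is a reasonable container here precisely because push_back never moves existing elements, so pointers held by in-flight cull/draw threads stay valid as the pool grows.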
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert, I have a very strange bug. The code I have written to render one bin twice works fine in the project code-base that I am working on. I took the class and integrated it into the OSGForest example, and there it does not work as expected. In the OSGForest example:
- The bin is rendered two times, as expected.
- The state-set that is added to only one of the bins is now used when both bins are rendered. And there is the bug :-/

I would appreciate it if you could take a look at the code and try it out. I have attached my class to this mail, and below here is the new main function for OSGForest. Just keep the rest of the osgforest.cpp file as it is. Only add the include and the highlighted code at the bottom of main.

#include "TransparencyGlitchFixNode.h"

int main( int argc, char **argv )
{
    // use an ArgumentParser object to manage the program arguments.
    osg::ArgumentParser arguments(argc,argv);

    // construct the viewer.
    osgViewer::Viewer viewer(arguments);

    float numTreesToCreates = 1;
    arguments.read("--trees",numTreesToCreates);

    osg::ref_ptr<ForestTechniqueManager> ttm = new ForestTechniqueManager;

    viewer.addEventHandler(new TechniqueEventHandler(ttm.get()));
    viewer.addEventHandler(new osgGA::StateSetManipulator(viewer.getCamera()->getOrCreateStateSet()));

    // add model to viewer.
    TransparencyGlitchFixNode* root = new TransparencyGlitchFixNode();
    root->addChild( ttm->createScene((unsigned int)numTreesToCreates) );
    viewer.setSceneData( root );

    return viewer.run();
}

Here is what to expect:
- The forest will be rendered twice.
- The first pass will be additive blend without depth-buffer write.
- The second pass shall be a normal render of the forest.
- The bug is that the state-set is used both times, so both passes get additive blend.

The additive blend is something I have added only to ease the visual debugging. The final code shall only have the state for turning off depth-buffer write.
If everything works smoothly then you are supposed to see the forest as normal, but with highlighting due to the additive blend at all places where the trees are transparent. Regards, Viggo

From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Date: Fri, 25 Jul 2008 10:42:08 +0200 Subject: Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
#include "TransparencyGlitchFixNode.h"
#include <osg/Depth>
#include <osg/BlendFunc>
#include <osg/StateSet>
#include <osgUtil/CullVisitor>

//
/*!
 * \par Actions:
 * - Creates one state-set that will be used multiple places.
 * - Creates two helper-bins to be ready for use (see traverse function).
 */
TransparencyGlitchFixNode::TransparencyGlitchFixNode()
: osg::Group ()
, _stateSet  ( 0 )
{
    // Create the state-set that we use to turn off depth-buffer write
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert, Completely understandable :-) Regards, Viggo

Date: Fri, 25 Jul 2008 14:00:09 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)

Hi Viggo, I am trying to get a release out the door. I'm afraid I don't have the time to go chasing up experimental user code. Robert.
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert :-D Thanx a lot for pointing in the right direction!!! I can now enforce render-bin 10 to be rendered twice each frame with the needed stateset changes. No node-mask stuff is needed. This is a completely stand-alone fix :-) This is what I did: The camera is set up with a callback: camera.setPreDrawCallback( new MyCallback() ); The callback struct's operator () looks like this: virtual void operator () (osg::RenderInfo& renderInfo) const { osg::Camera* camera = renderInfo.getCurrentCamera(); if( !camera ) { return; } osgViewer::Renderer* renderer = dynamic_cast<osgViewer::Renderer*>( camera->getRenderer() ); if( !renderer ) { return; } // HACK: This loop should not be here... // Need to figure out which scene-view is used (0 or 1). // Renderer::draw() does it this way: sceneView = _drawQueue.takeFront() // _drawQueue is protected and not accessible through class methods. // This hack means we do the job below twice each frame. for( int i=0; i<2; i++ ) { osgUtil::SceneView* sceneView = renderer->getSceneView( i ); if( !sceneView ) { return; } osgUtil::RenderStage* renderStage = sceneView->getRenderStage(); if( !renderStage ) { return; } osgUtil::RenderBin::RenderBinList& binList = renderStage->getRenderBinList(); if( binList.find(10) != binList.end() ) { // Clone bin 10 osgUtil::RenderBin* clonedBin = new osgUtil::RenderBin( *(binList[10].get()) ); // Clone the stateset // TODO: Need to check that getStateSet does not return NULL. osg::StateSet* stateSet = new osg::StateSet( *(clonedBin->getStateSet()) ); // Ensure the cloned stateset is used in the cloned bin clonedBin->setStateSet( stateSet ); // Cloned bin shall not write to the depth-buffer stateSet->setMode( GL_DEPTH_TEST, osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE ); stateSet->setAttributeAndModes( new osg::Depth(osg::Depth::LESS, 0.0, 1.0, false), osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE ); // Ensure the cloned bin is rendered before bin 10. 
binList[9] = clonedBin; } } } I am not sure how to solve the 0..1 loop marked by // HACK: in the source. Do you know how I can tell which SceneView is being used? Do you see any other problems with this code? Regards, Viggo Date: Wed, 23 Jul 2008 15:01:50 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Is it possible to know when the node-graph is 'dirty'? Hi Viggo, I think you are on a totally wrong take w.r.t. trying to track changes in the scene graph, for what is effectively just a custom transparent renderbin setup that has little to do with the scene itself. The way you should tackle it is to customize the rendering backend so that the bins you require are built for you. One way, for instance, would be to post-process the RenderStage and its contents after the CullVisitor has filled it in. Robert.
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert, I followed your advice and added the code into the traverse function of my root node. Much of the code could now be removed, and I got rid of the 0..1 loop hack that I had to do earlier. I think the new code looks very good. Do you see anything you would do differently? The new code looks like this: void MyRootNodeClass::traverse( osg::NodeVisitor& nv ) { osg::Group::traverse( nv ); // Clone render-bin 10 if this is a cull visitor if( nv.getVisitorType() == osg::NodeVisitor::CULL_VISITOR ) { osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>( &nv ); if( cv ) { // Act if we have a RenderStage pointer if( osgUtil::RenderStage* renderStage = cv->getRenderStage() ) { // Get the render-bin list osgUtil::RenderBin::RenderBinList& binList = renderStage->getRenderBinList(); if( binList.find(10) != binList.end() ) { // Clone bin 10 osgUtil::RenderBin* clonedBin = new osgUtil::RenderBin( *(binList[10].get()) ); // Clone the state-set osg::StateSet* originalStateSet = clonedBin->getStateSet(); osg::StateSet* stateSet = (originalStateSet) ? new osg::StateSet( *originalStateSet ) : new osg::StateSet(); // Ensure the cloned state-set is used in the cloned bin clonedBin->setStateSet( stateSet ); // Cloned bin shall not write to the depth-buffer stateSet->setMode( GL_DEPTH_TEST, osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE ); stateSet->setAttributeAndModes( new osg::Depth(osg::Depth::LESS, 0.0, 1.0, false), osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE ); // Ensure the cloned bin is rendered before bin 10. binList[9] = clonedBin; } } } } } Regards, Viggo Date: Thu, 24 Jul 2008 11:19:21 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?) Hi Viggo, I'd do this trick using a CullCallback on the topmost node of the sub-graph that you want to repeat, rather than a pre-draw callback. 
The CullVisitor keeps track of the current RenderStage. Robert.
Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?)
Hi Robert, I have changed the code a bit. I now create my stateset (_myPreCreatedStateSet) at class construction, and I no longer clone bin 10. I now only do this inside the inner check: if( binList.find(10) != binList.end() ) { binList[9] = new osgUtil::RenderBin(); binList[9]->setStateSet( _myPreCreatedStateSet ); (binList[9]->getRenderBinList())[9] = binList[10]; } The only thing that annoys me now is the call to new(). Creating one bin as a member variable in my class and using it instead of the call to new causes the system to crash. Double buffering, I guess. So maybe I can fix this by having two RenderBin instances in my class and using them on alternating calls? But I suspect that may not be enough? Would I need two per cull call that is run per frame? Say that in the future I render with 4 different cameras and cull the world 4 times. Would I then need 4*2 bin instances to use instead of calling new()? Regards, Viggo Date: Thu, 24 Jul 2008 12:52:06 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Robert: I figured it out :-) (was: Is it possible to know when the node-graph is 'dirty'?) Hi Viggo, You could possibly experiment with reusing the bin rather than cloning it: insert two bins for bin 9 and 10, then have the original transparent bin nested within them. I can't recall the exact details but I think I've tried a trick like this in the past. This would also allow you to reuse a fixed StateSet on every frame rather than creating one. Robert.
Re: [osg-users] Is it possible to know when the node-graph is 'dirty'?
Hi Peter, What you say is true in most cases. The 'simplest' way to do this would be to tell my system that the node-tree needs re-formatting each time we call OSG to add children. That would require two calls each time, and also require humans to remember to implement both calls. This opens up human errors, which at least I don't want (I am lazy). I want to make my module (the one that needs to format the node-masks) independent of the rest of the system, and to ensure that the rest of the system does not need to know about my module. I also want to be able to add 3rd-party libraries to my application without re-writing them to make special calls to my module when they add nodes to the tree. So, my conclusion is that I need to automatically know when I need to format the node-masks. osg::Group has a callback function named osg::Group::childInserted. I can make my root-node a subclass of osg::Group where I implement the childInserted function. This way I will get a callback each time someone adds nodes to my root-node. If a module creates its own osg::Group and adds children to it dynamically, then I would not know anything about that. I also saw Robert's reply and I agree that a dirty system on the node-tree would be very expensive. I knew that my question was a long shot. I am now thinking about other solutions to achieve my goal. The goal is to render the world with 3 slave cameras. One shall render all render-bins up to bin 9. One shall render bin 10. And the 3rd shall render bin 10 and up. So bin 10 is rendered twice. This is to fix some glitches on transparent polygons that intersect each other while needing to write to the depth-buffer. Ok, so another solution would be to force a camera to only render a certain range of bins. I have not seen anywhere in the code around the camera or cull-settings where you can specify that a camera stays within a limited range of render-bins, so I guess this won't be as easy either? 
Do you have any suggestions on how to render one bin twice in the same frame, where the first and second render of the bin have a few render-states set differently? Regards, Viggo Date: Wed, 23 Jul 2008 07:58:37 +0200 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [osg-users] Is it possible to know when the node-graph is 'dirty'? Hi Viggo, I don't know the details of your code, but if you are adding a sub-node then you already have code that is called for adding the sub-node, and thereby you know when the scene-graph has been modified? regards, Peter http://osghelp.com On Wed, Jul 23, 2008 at 7:51 AM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi, My code is currently formatting the node-masks of the whole scene-graph before we start rendering. The formatting code is only run once. If a sub-node is added to the node-tree afterwards then I want to run my formatting code again. Is there any way to know whether or not the scene-graph has been modified? Regards, Viggo -- Regards, Peter Wraae Marinow www.osghelp.com - OpenSceneGraph support site
[osg-users] Is this a too dirty hack?
Hi, I am on a quest to figure out when the node-tree is dirty. I have found a way... but I do not know if it is too dirty a hack to actually use. I have a class that inherits osg::Group. I use this as the root of my scene-graph. Each time someone adds a node to the world, the dirtyBound function is called for all parents. This function is not virtual, so I cannot override it in my specialization of the osg::Group class. I can however set up a callback that is called when we calculate the bounding sphere. This callback is only executed if someone calls getBound while the bounding-sphere is flagged dirty. So, if the callback to calculate a bounding sphere is called on my root-node, then I know that the node tree has been changed. I can thus check whether the node-tree is 'dirty' by calling getBound() on my own class: if that results in a callback, then it was dirty. So the question is: is this too dirty a hack to use, or is it ok-ish? Regards, Viggo
Re: [osg-users] Is this a too dirty hack?
I am doing it because I want to be able to set node-masks on nodes that are added to the scene-graph after I have initialized my scene. I am trying to avoid having to specialize the code where we add to the tree. I want the code module that sets node-masks to be completely standalone. The hack seems to fail, however: any movement update on objects in the tree will also set the bound dirty, so I am currently getting a dirty flag every frame because of the Ephemeris moon's movement :-) I am not using VPB, but the idea is to somehow know when anything is added to the tree so that I can process it. Viggo From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Date: Wed, 23 Jul 2008 07:15:48 -0400 Subject: Re: [osg-users] Is this a too dirty hack? Who cares if it's dirty? If it works, it works ;) I'm still not sure why you're doing it this way, as you have to add all your nodes to your scene anyway; if you add a simple addNodesToScene type function that does something like addNodesToScene( osgDB::readNodeFile("myfile.flt") ) or similar, then you catch all the additions, since you have to load the node. This falls down a little if you're using VPB. From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Viggo Løvli Sent: Wednesday, July 23, 2008 6:35 AM To: OSG Mailing List Subject: [osg-users] Is this a too dirty hack? Hi, I am on a quest to figure out when the node-tree is dirty. I have found a way... but I do not know if it is too dirty a hack to actually use. I have a class that inherits osg::Group. I use this as the root of my scene-graph. Each time someone adds a node to the world, the dirtyBound function is called for all parents. This function is not virtual, so I cannot override it in my specialization of the osg::Group class. I can however set up a callback that is called when we calculate the bounding sphere. This callback is only executed if someone calls getBound while the bounding-sphere is flagged dirty. 
So, if the callback to calculate a bounding sphere is called on my root-node, then I know that the node tree has been changed. I can thus check whether the node-tree is 'dirty' by calling getBound() on my own class: if that results in a callback, then it was dirty. So the question is: is this too dirty a hack to use, or is it ok-ish? Regards, Viggo
Re: [osg-users] Is this a too dirty hack?
Hi Robert, Oh, I would probably be willing to go through fire to ensure that nobody can break the system :-) I am a great fan of writing modules that are as independent of other modules as possible. This also means that I do not want to make future modules dependent on current modules either (unless they have to be). So if I can find a way for my module to automatically ensure that the node-masks hold a perfect format throughout the node-tree, then I will be happy. Adding a pre-processing function would cause a few problems: - Future modules will have to set up their node-tree at module init so that my module can run the pre-processing on them once. - Future modules will have to call my module to pre-process any nodes they want to add later on. - Adding some 3rd-party software will thus require interface coding, as it won't be able to use OSG directly. I want to make life less painful in the future by doing some painful things now :-) Currently I am quite unable to find a good and easy way to automatically detect when nodes are added to the tree. There is one call, osg::Group::childInserted(). It does not do anything: virtual void childInserted( unsigned int /*pos*/ ) {} That would have been a great place for me to add code that calls all parents' childInserted, and thus end up with a call to that function on the root node. I could override it in the root node to do my formatting. Adding code there is not something everyone wants, so it would have to be a modification I apply to OSG every time we upgrade. That would however be less painful than having to ensure that future modules we create or obtain elsewhere stay true to our node-mask regime. I am still willing to put in a few more hours to avoid having to do that. Regards, Viggo Date: Wed, 23 Jul 2008 13:02:08 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Is this a too dirty hack? Hi Viggo, Do you really have to go through all this hassle? 
Can't you just catch changes to the scene graph as they are being made by your app? The only part of the OSG which would add nodes to the scene graph is the DatabasePager, and you can catch all loads from this via a Registry::ReadFileCallback. Robert. On Wed, Jul 23, 2008 at 11:35 AM, Viggo Løvli [EMAIL PROTECTED] wrote: [snip]
Re: [osg-users] Is this a too dirty hack?
I got the mail from Robert 3 times :-) V From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Date: Wed, 23 Jul 2008 08:28:20 -0400 Subject: Re: [osg-users] Is this a too dirty hack? Has Max Headroom entered the OSG ;) Your email is repeating ;), at least for me -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Robert Osfield Sent: Wednesday, July 23, 2008 8:02 AM To: OpenSceneGraph Users Subject: Re: [osg-users] Is this a too dirty hack? Hi Viggo, Do you really have to go through all this hassle? Can't you just catch changes to the scene graph as they are being made by your app? The only part of the OSG which would add nodes to the scene graph is the DatabasePager, and you can catch all loads from this via a Registry::ReadFileCallback. Robert. On Wed, Jul 23, 2008 at 11:35 AM, Viggo Løvli [EMAIL PROTECTED] wrote: [snip]
Re: [osg-users] Is it possible to know when the node-graph is 'dirty'?
Hi Peter, This is the problem I am trying to solve: --- Two partly transparent polygons intersect each other. Example: a tree created from two quads. The tree has many pixels that are completely opaque, many that are semi-visible (transparent), and of course many invisible pixels. If you render them without writing to the depth buffer: one will partly overwrite the other. If you render them with writing to the depth buffer: one will partly block the other. Solution: render them twice. First render without writing to the depth buffer; second render with writing to the depth buffer. When you have multiple trees, trees that are close to each other will sometimes overwrite or block each other, so I need to solve this too. It can be solved by rendering all trees in one pass, then all trees again in a second pass. Rendering each tree independently in two passes will only solve the problem for that tree, while it can still block or overwrite other trees. So I must render all trees without writing to the depth buffer, and then all trees again with writing to the depth buffer. This solution works great. The following is a new problem occurring when I try to render parts of the scene-graph two times: I need to reduce the number of polygons I render twice. So all completely opaque polygons need to be rendered only once, while all transparent polygons must be rendered twice. The node tree is quite complex. It contains many different nodes which are turned on and off at runtime, and I need to detect which of them have transparent polygons. This problem is solved by a node visitor that checks each node's stateset to see if it has the BLEND state turned on. I can also say that render bin 10 is the only bin that contains transparent objects. If I want to go for the group-node solution, then I need a group that points only to transparent objects, and another group that points to the same objects. I then need to render both in one go. 
Sounds doable, but it would be quite messy when you, for example, add LOD nodes that contain both transparent and opaque objects. I would need to split them apart at runtime and actually rewrite the whole tree. I do not think that is doable. My solution was to mask the nodes so that I could use the cullMask on the camera to decide whether I want to render opaque objects or transparent objects. So I can have 3 cameras. The first one renders opaque; the 2nd renders transparent without depth write; the 3rd renders transparent again with depth write. The solution works like a dream so far. So what is the real problem? If someone adds anything to the node tree, then I need to pre-process the nodes and set node-masks on them. I can, as you suggest, write an interface function that all of our code must use to add nodes to the tree. It would work nicely. Any future code will then depend on my library. Not a high price to pay. Any future 3rd-party products we use will also have to use my add function. This may cost more. If a product creates a sub-tree which it gives back to me, then I can add it through the function. If the product later on, through OSG callbacks, decides to add more nodes, then I have a huge problem: they would not use my function and would probably have 0x as node mask, which means they would be rendered once per camera I use. So... I am struggling to avoid this function. If I can somehow know that something was added to the tree, then I can simply re-format the tree. Any future products will then work automatically, and no human errors can break the system. Sorry for writing such a large mail, but it's hard to explain with letters :-) I hope it clarifies my problem. Cheers, Viggo Date: Wed, 23 Jul 2008 15:10:45 +0200 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [osg-users] Is it possible to know when the node-graph is 'dirty'? Hi Viggo, This is only a suggestion (you may have other reasons). 
To avoid the human error, as you called it (meaning someone forgets to call the reformat code), you should have a method that 3rd-party code/humans must use when adding to your scene, for example: void CMySceneManagerThingy::AddToScene( osg::Node* pNode ) { // add pNode wherever you want in the scene // then reformat your scene as you want } On the other subject, about rendering the same object multiple times: you could just create a new group, add your objects to that group, and add the group to the scene (do this as many times as you want). I don't know why you should be messing around with the render bins, unless you are trying some kind of render pass? regards, Peter http://osghelp.com On Wed, Jul 23, 2008 at 11:00 AM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi Peter, What you say is true in most
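[Editor's note] The cull-mask test that Viggo's 3-camera scheme relies on can be sketched with plain bit operations. The mask constants here are made up for illustration; the AND test itself mirrors the one OSG's cull traversal performs against each node's nodeMask.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative mask values (not from the original mail).
const std::uint32_t MASK_OPAQUE      = 0x1;
const std::uint32_t MASK_TRANSPARENT = 0x2;

// A camera traverses (and thus renders) a node when its cull mask ANDed
// with the node's mask is non-zero.
bool isTraversed(std::uint32_t cullMask, std::uint32_t nodeMask)
{
    return (cullMask & nodeMask) != 0;
}
```

A node left with the all-bits-set default mask passes this test for every camera, which is exactly the failure mode described above for 3rd-party nodes that were never pre-processed.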
Re: [osg-users] Is it possible to know when the node-graph is 'dirty'?
Hi Robert, That sounds interesting. I hope it can solve my problem in a much better way. I am looking at the camera class and so far assuming that I need to use the setPreDrawCallback function. It looks to me like RenderStage calls this after culling has been done. My day is ending now; I will look more into this tomorrow. This is however an area of OSG I do not know well yet, so if you have more information for me then please send a mail :-) Thanx, Viggo Date: Wed, 23 Jul 2008 15:01:50 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Is it possible to know when the node-graph is 'dirty'? Hi Viggo, I think you are on a totally wrong take w.r.t. trying to track changes in the scene graph, for what is effectively just a custom transparent renderbin setup that has little to do with the scene itself. The way you should tackle it is to customize the rendering backend so that the bins you require are built for you. One way, for instance, would be to post-process the RenderStage and its contents after the CullVisitor has filled it in. Robert. On Wed, Jul 23, 2008 at 2:52 PM, Viggo Løvli [EMAIL PROTECTED] wrote: [snip]
[osg-users] Is it possible to know when the node-graph is 'dirty'?
Hi, My code currently formats the node-masks of the whole scene-graph before we start rendering. The formatting code is only run once. If a sub-node is added to the node-tree afterwards, then I want to run my formatting code again. Is there any way to know whether or not the scene-graph has been modified? Regards, Viggo
[osg-users] Avoiding the intersection visitor
Hi, I want to save some CPU cycles by skipping collision detection on some of my scene nodes. Is the following code the best way to do it? void MyGroup::traverse( osg::NodeVisitor& nv ) { switch( nv.getVisitorType() ) { case osg::NodeVisitor::NODE_VISITOR: if( dynamic_cast<osgUtil::IntersectionVisitor*>( &nv ) ) { return; // Prohibit the visitor from visiting my children } ... other traverse code ... } osg::Group::traverse( nv ); } The class MyGroup (which publicly inherits osg::Group) is the parent of all nodes that I do not want intersection visitors to see. Viggo
Re: [osg-users] Avoiding the intersection visitor
Hi Mathias, Thanx, your way is much better. :-) Cheers, Viggo From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Date: Mon, 21 Jul 2008 10:43:24 +0200 Subject: Re: [osg-users] Avoiding the intersection visitor Hi, On Monday 21 July 2008 10:37, Viggo Løvli wrote: I want to save some CPU cycles by ignoring collision detection on some of my scene nodes. Is the following code the best way to do it? I use node masks to classify which nodes are visible to intersection and which are not. Greetings Mathias -- Dr. Mathias Fröhlich, science + computing ag, Software Solutions Hagellocher Weg 71-75, D-72070 Tuebingen, Germany Phone: +49 7071 9457-268, Fax: +49 7071 9457-511
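The node-mask test Mathias refers to boils down to a bitwise AND: a visitor only traverses a node when its traversal mask ANDed with the node's mask is non-zero. A small sketch of that rule; the two mask constants are my own hypothetical layout, not values defined by OSG:

```cpp
#include <cstdint>

// Hypothetical mask layout: one bit for rendering, one for intersection.
constexpr std::uint32_t RENDER_BIT    = 0x1;
constexpr std::uint32_t INTERSECT_BIT = 0x2;

// The core of the test OSG performs before traversing a node:
// traversal continues only when the masks share at least one bit.
bool isTraversed(std::uint32_t traversalMask, std::uint32_t nodeMask) {
    return (traversalMask & nodeMask) != 0u;
}
```

Under this layout you would call setNodeMask(RENDER_BIT) on nodes that should be invisible to picking, and give the IntersectionVisitor a traversal mask of INTERSECT_BIT. No traverse() override is needed, and the cull traversal is unaffected.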
Re: [osg-users] I get errors when trying to render to a luminance buffer...
Hi again :-) I abandoned the idea of using a floating point buffer. I went into the thinking box and came to the same conclusion as you wrote about at the end of your comment. I needed quite high resolution, so I decided to use all 4 channels (RGBA). The numbers I want to accumulate are seldom large, so I needed most resolution in the lower scale. I thus decided to use RGBA where R is most significant and A is least significant. Bit usage is: R = 6 bits, G = 5 bits, B = 4 bits, A = 3 bits. All unused bits overlap with the next channel. I am using this to generate a pixel weight when rendering many volumetric clouds on top of each other on screen. Naturally, most overlapping clouds will be further away from the camera, so I needed as many overlap bits on the least significant channels as I could get. My usage accepts, in the worst case, 32 overlapping pixels with maximum weight (in the distance). I think that is something I can live with :-) Anyhow, I got a range (18 bits) that is good enough for my usage, and it gives decent cloud sorting for clouds seen up to 40 kilometers away. I must stress one point: - If you ever try using the alpha channel for this, then remember to turn off alpha clipping (set alpha-func to ALWAYS). Viggo Date: Mon, 2 Jun 2008 10:05:24 +0200 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] I get errors when trying to render to a luminance buffer... Hi, Viggo Løvli wrote: Ok, that makes sense. It happens every time I render with a floating point texture. The fall in framerate does not happen when I run the OSG multi render target example using HDR. My code does however use additive blending toward the floating point texture, so I bet that is what causes it to fall into a software rendering mode. We have also experienced this slowdown when blending was enabled with float textures.
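One way to read the 6/5/4/3-bit split above is as a single 18-bit value spread across the four 8-bit channels, with R most significant; the spare high bits of each lower channel then act as carry headroom during additive blending. A sketch of that packing under my interpretation; the exact bit positions are an assumption, not taken from Viggo's actual shader:

```cpp
#include <cstdint>

// Pack an 18-bit weight into RGBA8: R gets bits 17..12 (6 bits),
// G bits 11..7 (5 bits), B bits 6..3 (4 bits), A bits 2..0 (3 bits).
// The unused high bits of G, B and A are left free so that additive
// blending can accumulate overlap before a channel saturates.
struct Rgba { std::uint8_t r, g, b, a; };

Rgba pack18(std::uint32_t v) {
    return {
        static_cast<std::uint8_t>((v >> 12) & 0x3Fu),  // 6 bits
        static_cast<std::uint8_t>((v >>  7) & 0x1Fu),  // 5 bits
        static_cast<std::uint8_t>((v >>  3) & 0x0Fu),  // 4 bits
        static_cast<std::uint8_t>( v        & 0x07u),  // 3 bits
    };
}

std::uint32_t unpack18(const Rgba& c) {
    return (std::uint32_t(c.r) << 12) |
           (std::uint32_t(c.g) <<  7) |
           (std::uint32_t(c.b) <<  3) |
            std::uint32_t(c.a);
}
```

The round trip is lossless for values below 2^18; in the GPU version the same shifts become multiplies/divides by powers of two on normalized channel values.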
Testing on newer cards is pending, as we cannot figure out from docs on the internet whether this is actually supported in current hardware at all. It seems like DX10.1 mandates float blending, but we will test to make sure. Please let me know if you know whether it is supported in hardware. Do you know about any texture surface bit format that is more than 8 bits (unsigned integer)? I don't know of any. I've only seen a paper once where people were using three channels to simulate one large channel. The RGB channels were partially overlapped to create a higher dynamic range channel. jp Viggo Date: Fri, 30 May 2008 16:29:05 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] I get errors when trying to render to a luminance buffer... Hi Viggo, When performance drops like this it's because you've dropped onto a software fallback path in the OpenGL driver. Exactly what formats are software vs hardware depends upon the hardware and OpenGL drivers. You'll need to check your hardware vendor's specs to see what will be hardware accelerated. Robert. On Fri, May 30, 2008 at 2:16 PM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi Robert, I modified my code as you suggested. The warning is gone :-) The framerate is now 10 seconds per frame instead of 30 frames per second. It does something. The texture I render to remains black (cleared to black). If I change the setInternalFormat to GL_RGBA then the framerate is up again, and the texture gets colors. This works, but then I only have 8 bits in the red channel. What I need is as many bits as possible in the red channel, preferably 32. And I do not need the GBA channels. Do you have a suggestion for me on this one? Viggo Date: Fri, 30 May 2008 13:25:24 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] I get errors when trying to render to a luminance buffer... Hi Viggo, The warning is exactly right: pbuffers don't support multiple render targets, only FrameBufferObjects do.
Perhaps what you intend is not to use multiple render targets, in which case you should set the Camera attachment to COLOR_BUFFER rather than COLOR_BUFFER0; the latter tells the OSG that you want MRT and will be using gl_FragData[] in your shaders. Also, the Camera::setDrawBuffer(GL_COLOR_ATTACHMENT0_EXT) is inappropriate for pbuffers. Robert. On Fri, May 30, 2008 at 1:18 PM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi, I want to render to a floating point buffer, and I set things up like this: tex->setInternalFormat( GL_LUMINANCE16F_ARB ); tex->setSourceFormat( GL_RED ); tex->setSourceType( GL_FLOAT ); camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT ); camera->attach( osg::Camera::BufferComponent
[osg-users] I get errors when trying to render to a luminance buffer...
Hi, I want to render to a floating point buffer, and I set things up like this:

tex->setInternalFormat( GL_LUMINANCE16F_ARB );
tex->setSourceFormat( GL_RED );
tex->setSourceType( GL_FLOAT );
camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
camera->attach( osg::Camera::BufferComponent( osg::Camera::COLOR_BUFFER0 ), tex );
camera->setDrawBuffer( GL_COLOR_ATTACHMENT0_EXT );

My fragment shader that writes to the surface outputs the value this way: gl_FragData[0].r = 1.0; Another fragment shader reads the surface this way: value = texture2DRect( id, gl_FragCoord.xy ).r; I get the following output when I try to run my app: Warning: RenderStage::runCameraSetUp(state) Pbuffer does not support multiple color outputs. My app runs, but nothing is written to the texture. Is it possible to set up a surface that holds one channel (GL_RED) which is an unsigned int of 32-bit resolution? I'd rather use that than a float :-) Viggo
Re: [osg-users] I get errors when trying to render to a luminance buffer...
Ok, that makes sense. It happens every time I render with a floating point texture. The fall in framerate does not happen when I run the OSG multi render target example using HDR. My code does however use additive blending toward the floating point texture, so I bet that is what causes it to fall into a software rendering mode. Do you know about any texture surface bit format that is more than 8 bits (unsigned integer)? Viggo Date: Fri, 30 May 2008 16:29:05 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] I get errors when trying to render to a luminance buffer... Hi Viggo, When performance drops like this it's because you've dropped onto a software fallback path in the OpenGL driver. Exactly what formats are software vs hardware depends upon the hardware and OpenGL drivers. You'll need to check your hardware vendor's specs to see what will be hardware accelerated. Robert. On Fri, May 30, 2008 at 2:16 PM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi Robert, I modified my code as you suggested. The warning is gone :-) The framerate is now 10 seconds per frame instead of 30 frames per second. It does something. The texture I render to remains black (cleared to black). If I change the setInternalFormat to GL_RGBA then the framerate is up again, and the texture gets colors. This works, but then I only have 8 bits in the red channel. What I need is as many bits as possible in the red channel, preferably 32. And I do not need the GBA channels. Do you have a suggestion for me on this one? Viggo Date: Fri, 30 May 2008 13:25:24 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] I get errors when trying to render to a luminance buffer... Hi Viggo, The warning is exactly right: pbuffers don't support multiple render targets, only FrameBufferObjects do.
Perhaps what you intend is not to use multiple render targets, in which case you should set the Camera attachment to COLOR_BUFFER rather than COLOR_BUFFER0; the latter tells the OSG that you want MRT and will be using gl_FragData[] in your shaders. Also, the Camera::setDrawBuffer(GL_COLOR_ATTACHMENT0_EXT) is inappropriate for pbuffers. Robert. On Fri, May 30, 2008 at 1:18 PM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi, I want to render to a floating point buffer, and I set things up like this: tex->setInternalFormat( GL_LUMINANCE16F_ARB ); tex->setSourceFormat( GL_RED ); tex->setSourceType( GL_FLOAT ); camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT ); camera->attach( osg::Camera::BufferComponent( osg::Camera::COLOR_BUFFER0 ), tex ); camera->setDrawBuffer( GL_COLOR_ATTACHMENT0_EXT ); My fragment shader that writes to the surface outputs the value this way: gl_FragData[0].r = 1.0; Another fragment shader reads the surface this way: value = texture2DRect( id, gl_FragCoord.xy ).r; I get the following output when I try to run my app: Warning: RenderStage::runCameraSetUp(state) Pbuffer does not support multiple color outputs. My app runs, but nothing is written to the texture. Is it possible to set up a surface that holds one channel (GL_RED) which is an unsigned int of 32-bit resolution? I'd rather use that than a float :-) Viggo
Re: [osg-users] Rendering the scene multiple times with different shaders.
Hi Robert, I've been wondering about making it possible to have a composite StateSet to provide switch and layer functionality, but have never tackled this as it complicates the basic API quite a bit for functionality that only a very small proportion of users will ever require. That being said, I do see value in having the OSG better support the type of usage you are putting it through; we just have to strive for a solution that is not intrusive to the rest of the OSG. Robert. Maybe I can take a look at how to integrate this into OSG? Well, if I get time to actually do it :-) But in case I do, then I would like to try a direct integration in the OSG code-base. I will find a way to keep the old API so that any code written earlier will still work. Performance should not be affected much. I think the controller for this should be the camera, at least that is what I need, but I'd like to look closer at the code before I make up my mind. I am quite new to OSG, so I need to know where and how to submit a code contribution candidate so that it can be reviewed and hopefully approved. Who should I talk to about it? I am impressed by how quickly I got feedback on this mail thread :-) Viggo
Re: [osg-users] Rendering the scene multiple times with different shaders.
Hi Robert, Thanx for the info about that one. We will most certainly need multi-threaded rendering later on. I am currently looking into the NodeMask solution. What about adding an array of states or shaders to a node? This would make it possible to hold multiple shaders in one node and index them differently depending on what camera you use. Is that a good idea for the future? Viggo Date: Fri, 23 May 2008 09:16:35 +0100 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Rendering the scene multiple times with different shaders. Hi Viggo, You should be able to set things up using NodeMasks, as you could set the push/pop traversal mask via a cull callback that decorates each branch of your multipass. Using osg::Switch would require modifying the switch during the cull traversal, which is something you should avoid if you ever want multi-threaded multi-context rendering. Robert. On Thu, May 22, 2008 at 9:11 PM, Viggo Løvli [EMAIL PROTECTED] wrote: Hi Eric, It looks to me like using the switch could be the solution for my needs. I think that it would either have to be a switch, or a mask. An example of what I want to do could be written like this: Original part of my scene graph:

Root
|
+--Node_1 (Specifies states and a fragment shader)
   |
   +--Node_2 (Specifies a geometry to be rendered)
   |
   +--Node_3 (Specifies another geometry to be rendered)

Using the switch to make sure I can render the scene twice with different shaders:

Root
|
+--Switch
   |
   +--Node_1 (Specifies states and a fragment shader)
   |  |
   |  +--Node_2 (Specifies a geometry to be rendered)
   |  |
   |  +--Node_3 (Specifies another geometry to be rendered)
   |
   +--Node_4 (This is a clone of Node_1 where I have added a new shader)
      |
      +--Node_2 (This is the same node as Node_2 above)
      |
      +--Node_3 (This is the same node as Node_3 above)

Conclusion: - The switch will let one scene render use Node_1 and another scene render use Node_4.
Using a mask to achieve the same effect:

Root
|
+--Node_1 (Specifies states and a fragment shader, MASK = 0x1)
|  |
|  +--Node_2 (Specifies a geometry to be rendered)
|  |
|  +--Node_3 (Specifies another geometry to be rendered)
|
+--Node_4 (This is a clone of Node_1 where I have added a new shader, MASK = 0x2)
   |
   +--Node_2 (This is the same node as Node_2 above)
   |
   +--Node_3 (This is the same node as Node_3 above)

Conclusion: - This should work as far as I have understood the mask system, and this time I do not need the switch node. - In this solution I would not need to alter the position of Node_1 in the tree. It is essential for me to have a solution where I do not need to do any work in between each time I render the scene. Viggo Date: Thu, 22 May 2008 17:14:09 +0200 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Rendering the scene multiple times with different shaders. I think using osg::Switch is the right way to do it. You can set a StateSet on any Node, and then (correct me if I'm wrong) all children will inherit the StateSet. So, I am thinking about a graph like this:

Root
|
....
|
Switch
|
+--Node1 - stateset(shader1)
|    |
+--Node2 - stateset(shader2)
     |
     (both point at the same "Rest of your graph")

So you can insert this kind of switch into any subgraph which uses another shader; you do not have to duplicate the rest of your subgraph, since two parents can have the same child. Another way (similar) is to use a simple group instead of the osg::Switch, but let OSG choose the right branch via nodeMask. In this way, you do not have to update your graph between the two passes. I hope it helps you. Eric Z. From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Viggo Løvli Sent: Thursday, 22 May 2008 15:57 To: osg-users@lists.openscenegraph.org Subject: [osg-users] Rendering the scene multiple times with different shaders. Hi, My scene contains models that use one or many textures; some use vertex and fragment shaders of different kinds.
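The per-pass branch selection in the mask variant is again a bitwise AND: each pass camera carries a cull mask, and only the branch whose node mask overlaps it gets drawn. A self-contained sketch of that selection with hypothetical stand-in types; real code would use osg::Camera::setCullMask() and osg::Node::setNodeMask():

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-in for a shader-carrying branch under Root.
struct Branch {
    std::string   fragShader;
    std::uint32_t nodeMask;   // e.g. 0x1 for the normal pass, 0x2 for the MRT pass
};

// Mimics the cull traversal: a branch is drawn by a pass only when the
// camera's cull mask and the branch's node mask share at least one bit.
std::string shaderUsedByPass(const std::vector<Branch>& branches,
                             std::uint32_t cullMask) {
    for (const Branch& b : branches)
        if ((b.nodeMask & cullMask) != 0u)
            return b.fragShader;
    return "";   // nothing from this set is visible in the pass
}
```

With Node_1 masked 0x1 and Node_4 masked 0x2, a camera culling with 0x1 picks up the normal-pass shader and a camera culling with 0x2 picks up the MRT one, with no graph edits between passes.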
Complex but good looking :-) I need to render the scene many times each frame. The first render is a normal one, so I need no change in textures, states or shaders. The second render is a multi-target render. Here I need to change all the fragment shaders that my scene uses. Each shader needs to output data to each render target. My question is: - What is the best way to switch shaders on many models in my scene graph? One way to do it is to put a shader on the top node and override all child nodes, but that only works if the entire scene graph renders with the same shader. My scene uses
[osg-users] Rendering the scene multiple times with different shaders.
Hi, My scene contains models that use one or many textures; some use vertex and fragment shaders of different kinds. Complex but good looking :-) I need to render the scene many times each frame. The first render is a normal one, so I need no change in textures, states or shaders. The second render is a multi-target render. Here I need to change all the fragment shaders that my scene uses. Each shader needs to output data to each render target. My question is: - What is the best way to switch shaders on many models in my scene graph? One way to do it is to put a shader on the top node and override all child nodes, but that only works if the entire scene graph renders with the same shader. My scene uses different shaders, so overriding is not the solution. One idea I have is to parse through all scene graphs that I add to my world and duplicate all nodes that set a fragment shader. I could then create one shader for the normal render and one for the multi-target render. This would require me to mask those nodes so that the correct ones are used in each render. This would also require me to write my own function for adding nodes to the scene. Anyhow, I would be very happy if someone could guide me in the right direction on this one. Regards, Viggo
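The duplicate-and-mask idea above only needs the shader-bearing node itself to be cloned; both copies can share the same children, since a scene-graph child may have two parents. A sketch with hypothetical stand-in types; real code would clone the osg::Node and reuse the child ref_ptrs:

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for a state-bearing group node.
struct ShaderNode {
    std::string   fragShader;
    std::uint32_t nodeMask = 0x1;                       // normal-pass bit
    std::vector<std::shared_ptr<ShaderNode>> children;  // shared, not copied
};

// Shallow-copy the node so both parents point at the same children,
// give the clone the second-pass shader, and mask it for pass 2 only.
std::shared_ptr<ShaderNode> cloneForSecondPass(const std::shared_ptr<ShaderNode>& node,
                                               const std::string& secondPassShader) {
    auto clone = std::make_shared<ShaderNode>(*node);   // children vector copies the pointers
    clone->fragShader = secondPassShader;
    clone->nodeMask   = 0x2;                            // MRT-pass bit
    return clone;
}
```

Only the shader nodes are duplicated, so memory cost stays proportional to the number of distinct shaders, not to the size of the geometry subtrees.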
Re: [osg-users] Rendering the scene multiple times with different shaders.
Hi Eric, It looks to me like using the switch could be the solution for my needs. I think that it would either have to be a switch, or a mask. An example of what I want to do could be written like this: Original part of my scene graph:

Root
|
+--Node_1 (Specifies states and a fragment shader)
   |
   +--Node_2 (Specifies a geometry to be rendered)
   |
   +--Node_3 (Specifies another geometry to be rendered)

Using the switch to make sure I can render the scene twice with different shaders:

Root
|
+--Switch
   |
   +--Node_1 (Specifies states and a fragment shader)
   |  |
   |  +--Node_2 (Specifies a geometry to be rendered)
   |  |
   |  +--Node_3 (Specifies another geometry to be rendered)
   |
   +--Node_4 (This is a clone of Node_1 where I have added a new shader)
      |
      +--Node_2 (This is the same node as Node_2 above)
      |
      +--Node_3 (This is the same node as Node_3 above)

Conclusion: - The switch will let one scene render use Node_1 and another scene render use Node_4. Using a mask to achieve the same effect:

Root
|
+--Node_1 (Specifies states and a fragment shader, MASK = 0x1)
|  |
|  +--Node_2 (Specifies a geometry to be rendered)
|  |
|  +--Node_3 (Specifies another geometry to be rendered)
|
+--Node_4 (This is a clone of Node_1 where I have added a new shader, MASK = 0x2)
   |
   +--Node_2 (This is the same node as Node_2 above)
   |
   +--Node_3 (This is the same node as Node_3 above)

Conclusion: - This should work as far as I have understood the mask system, and this time I do not need the switch node. - In this solution I would not need to alter the position of Node_1 in the tree. It is essential for me to have a solution where I do not need to do any work in between each time I render the scene. Viggo Date: Thu, 22 May 2008 17:14:09 +0200 From: [EMAIL PROTECTED] To: osg-users@lists.openscenegraph.org Subject: Re: [osg-users] Rendering the scene multiple times with different shaders. I think using osg::Switch is the right way to do it.
You can set a StateSet on any Node, and then (correct me if I'm wrong) all children will inherit the StateSet. So, I am thinking about a graph like this:

Root
|
....
|
Switch
|
+--Node1 - stateset(shader1)
|    |
+--Node2 - stateset(shader2)
     |
     (both point at the same "Rest of your graph")

So you can insert this kind of switch into any subgraph which uses another shader; you do not have to duplicate the rest of your subgraph, since two parents can have the same child. Another way (similar) is to use a simple group instead of the osg::Switch, but let OSG choose the right branch via nodeMask. In this way, you do not have to update your graph between the two passes. I hope it helps you. Eric Z. From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Viggo Løvli Sent: Thursday, 22 May 2008 15:57 To: osg-users@lists.openscenegraph.org Subject: [osg-users] Rendering the scene multiple times with different shaders. Hi, My scene contains models that use one or many textures; some use vertex and fragment shaders of different kinds. Complex but good looking :-) I need to render the scene many times each frame. The first render is a normal one, so I need no change in textures, states or shaders. The second render is a multi-target render. Here I need to change all the fragment shaders that my scene uses. Each shader needs to output data to each render target. My question is: - What is the best way to switch shaders on many models in my scene graph? One way to do it is to put a shader on the top node and override all child nodes, but that only works if the entire scene graph renders with the same shader. My scene uses different shaders, so overriding is not the solution. One idea I have is to parse through all scene graphs that I add to my world and duplicate all nodes that set a fragment shader. I could then create one shader for the normal render and one for the multi-target render. This would require me to mask those nodes so that the correct ones are used in each render.
This will require me to write my own function for adding nodes to the scene. Anyhow, I would be very happy if someone could guide me in the right direction on this one. Regards, Viggo