Hi Michael,

On 10/10/2011 10:02 AM, Michael Raab wrote:
> Hi again,
>
> we have some new NVIDIA GeForce graphics cards (GTX 580, GTX 590) which do
> not accelerate all of our applications. Some applications are more than 10
> times faster than before, but other apps are much slower. What I've
> investigated so far is that the apps that run slower have a great deal of
> geometry distributed over only a few nodes in the scenegraph, e.g. more
> than 10 million tris across 20-30 nodes.
>
> Running gDebugger I found out that approx. 98% of the OpenGL calls
> generated by OpenSG 1.8 are deprecated as of OpenGL 3.1. Can deprecated
> function calls cause these performance problems with newer graphics
> boards? Has anyone else had similar problems?
The main reason you see that is that OpenSG 1.8 doesn't use VBOs yet. It uses vertex arrays if the data is single-indexed, or immediate mode otherwise. Neither of those paths is supported in OpenGL 3 any more, which is why you see the deprecation warnings.

However, by default OpenSG puts the geometry into a display list, which, as of the last tests I did (admittedly some time ago), was still the fastest way of pushing geometry through the system. So if you turned that off I could imagine seeing problems, or the driver's internal display-list handling may no longer cope well with very large lists. ~500,000 triangles per node doesn't sound excessive though, so I'm not sure why that would be a problem.

Are you using display lists?

Hope it helps

Dirk
_______________________________________________
Opensg-users mailing list
Opensg-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/opensg-users
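P.S. For reference, the three submission paths mentioned above look roughly like this at the raw OpenGL level. This is a generic sketch of standard OpenGL 1.x/2.x calls, not OpenSG 1.8's actual internal code, and it assumes a valid GL context is already current (it is not runnable standalone). The first two paths are what gDebugger flags as deprecated under OpenGL 3.1:

```cpp
// Sketch of the geometry-submission paths discussed above.
// Plain OpenGL calls; a current GL context is assumed.
#include <GL/gl.h>

// Path 1: immediate mode -- one call per vertex.
// Removed from the OpenGL 3 core profile.
void drawImmediate(const float* verts, int triCount)
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < triCount * 3; ++i)
        glVertex3fv(verts + i * 3);
    glEnd();
}

// Path 2: client-side vertex arrays -- one draw call, but the data
// still lives in application memory. Also removed from GL 3 core.
void drawVertexArray(const float* verts, int triCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, triCount * 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}

// Path 3: display list -- the driver records the calls once and can
// replay them from its own optimized storage. Deprecated in GL 3 as
// well, but historically very fast, which is why OpenSG uses it by
// default.
GLuint compileDisplayList(const float* verts, int triCount)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    drawVertexArray(verts, triCount);
    glEndList();
    return list;  // render later with glCallList(list)
}
```

The non-deprecated replacement for all three is a VBO: upload the vertex data once with glGenBuffers/glBindBuffer/glBufferData and draw from GPU memory, which is the path OpenSG 1.8 does not generate yet.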