Re: [osg-users] Parallel draw on a multi-core, single GPU architecture

2010-08-18 Thread Robert Osfield
Hi Tugkan, It only ever makes sense to parallelize draw dispatch when you have multiple graphics cards and one context per graphics card. Trying to add more threads than the OpenGL driver/graphics card can handle will just stall things and result in lower performance. If your draw dispatch is overloaded

Re: [osg-users] Parallel draw on a multi-core, single GPU architecture

2010-08-18 Thread Tugkan Calapoglu
Hi Robert, thanks for the answer. Our scene graph is heavily optimized and there don't seem to be any serious optimization opportunities left; maybe a few percent of improvement here and there. We observe that, as our databases grow, the performance requirement for the draw thread grows faster than

Re: [osg-users] Parallel draw on a multi-core, single GPU architecture

2010-08-18 Thread Robert Osfield
Hi Tugkan, On Wed, Aug 18, 2010 at 9:49 AM, Tugkan Calapoglu tug...@vires.com wrote: We observe that, as our databases grow, the performance requirement for draw thread grows faster than cull. In the long run, draw will be a serious limiting factor for us. So I wanted to see if we can take

Re: [osg-users] Parallel draw on a multi-core, single GPU architecture

2010-08-18 Thread Tugkan Calapoglu
Hi Robert, our databases are for driving simulation and they are usually city streets. We have been employing batching and occlusion culling with very good results so far. Extensive use of LODs is problematic because: 1) they actually reduce performance in most cases; 2) the switching

Re: [osg-users] Parallel draw on a multi-core, single GPU architecture

2010-08-18 Thread Robert Osfield
Hi Tugkan, On Wed, Aug 18, 2010 at 10:39 AM, Tugkan Calapoglu tug...@vires.com wrote: Actually, draw has always required more time than cull. If you look at the OSG examples, draw usually takes more time than cull. Now that draw has reached the 16 ms boundary, this has started to hurt us. It is typical for cull to be