Re: [osg-users] Parallel Rendering Problems using NVidia GPU cards in Windows

2012-06-30 Thread Stefan Eilemann

On 29. Jun 2012, at 21:03, WangWentao wrote:

 What about two identical GPUs?

The first one is chosen.


Stefan.

-- 
http://www.eyescale.ch
http://www.equalizergraphics.com
http://www.linkedin.com/in/eilemann





Re: [osg-users] Parallel Rendering Problems using NVidia GPU cards in Windows

2012-06-29 Thread Stefan Eilemann

On 26. Jun 2012, at 8:57, GeeKer Wang wrote:

 In fact, the Windows NVidia driver will try to send OpenGL commands to all GPUs
 when SLI mode is disabled.

NVidia drivers >= 256.0 send the commands to the most powerful GPU and then blit
the result to the display GPU. You can override the GPU on a per-application
basis in the control panel, afaik. On Linux this is a non-issue, as you address
the GPUs through X screens.
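For reference, a minimal sketch of addressing a specific X screen from OSG, assuming a second X screen is configured on the second GPU (the same screen DISPLAY=:0.1 would give you):

#include <osg/GraphicsContext>
#include <osg/ref_ptr>

// Request a context on X display 0, screen 1. With one X screen configured per
// GPU, this addresses the second GPU directly.
osg::ref_ptr<osg::GraphicsContext> createContextOnSecondXScreen()
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits =
        new osg::GraphicsContext::Traits;
    traits->setScreenIdentifier(":0.1");   // hostName:displayNum.screenNum
    traits->width  = 1024;
    traits->height = 768;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    return osg::GraphicsContext::createGraphicsContext(traits.get());
}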


Cheers,

Stefan.




Re: [osg-users] Parallel Rendering Problems using NVidia GPU cards in Windows

2012-06-29 Thread WangWentao
What about two identical GPUs?





Re: [osg-users] Parallel Rendering Problems using NVidia GPU cards in Windows

2012-06-26 Thread GeeKer Wang
Hi, Jason

It's a GPU issue.
I just wanted to verify the scalability of multi-GPU rendering on Windows, and the
answer is disappointing.
Few NVIDIA GPU cards support GPU affinity on Windows, and SLI mode is the
only way to use multiple cards.
In fact, the Windows NVidia driver will try to send OpenGL commands to all GPUs
when SLI mode is disabled.

On Linux, the default behavior is that each GPU is bound to a screen.
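For reference, the GPU-affinity path looks roughly like the sketch below, assuming a Quadro-class card that exposes WGL_NV_gpu_affinity and a current dummy GL context so wglGetProcAddress returns valid pointers:

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // WGL_NV_gpu_affinity types and function-pointer typedefs

// Returns a device context restricted to the GPU with the given index, or 0 if
// the extension is not exposed (e.g. on GeForce) or the index is out of range.
HDC createAffinityDC(unsigned int gpuIndex)
{
    PFNWGLENUMGPUSNVPROC pEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC pCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    if (!pEnumGpusNV || !pCreateAffinityDCNV)
        return 0;                          // extension not available on this card/driver

    HGPUNV gpu = 0;
    if (!pEnumGpusNV(gpuIndex, &gpu))
        return 0;                          // no GPU with this index

    HGPUNV gpuList[2] = { gpu, 0 };        // NULL-terminated GPU list
    return pCreateAffinityDCNV(gpuList);   // pick a pixel format and create the
                                           // OpenGL context on this DC
}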


-- 
Bob


Re: [osg-users] Parallel Rendering Problems using NVidia GPU cards in Windows

2012-06-25 Thread Jason Daly

Hi, Bob,

The additional card will only help if your application is GPU-bound 
(that is, waiting on the GPU to finish rendering a frame before the next 
can start).  Another possibility is that it is CPU bound (the CPU is 
taking most of the time and the GPU is waiting on it), and another is 
that it is interconnect bound (the bus between the CPU or RAM and the 
GPU is the bottleneck).
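A quick way to check which of these cases applies is OSG's built-in stats overlay; a minimal sketch:

#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>

// Attach OSG's on-screen statistics (press 's' in the viewer window to cycle
// through them); the per-frame event/update/cull/draw/GPU timings make it easy
// to tell a CPU-bound frame from a GPU-bound one.
void addStats(osgViewer::Viewer& viewer)
{
    viewer.addEventHandler(new osgViewer::StatsHandler);
}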


Just adding another card will not help if you're CPU-bound or 
interconnect bound, and it can actually make things worse if your scene 
isn't set up in an optimal way.  You might end up trying to cram twice 
the amount of data down the bus in order to try and keep both GPUs fed, 
and this will kill performance even more than with a single card.


One quick fix you can try is to run the osgUtil::Optimizer on your scene 
with the option VERTEX_PRETRANSFORM | INDEX_MESH | 
VERTEX_POSTTRANSFORM.  This will optimize the vertex data for the 
vertex cache common on modern GPUs, and may help with bus bandwidth.  
The other key is to be sure that your scene is organized and subdivided 
spatially, so that when you're viewing the scene, you don't end up 
duplicating draw calls across the GPUs.
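A minimal sketch of that optimizer pass, run once on the loaded scene before handing it to the viewer:

#include <osg/Node>
#include <osgUtil/Optimizer>

// Index the meshes and reorder vertex data for the post- and pre-transform
// vertex caches, as suggested above.
void optimizeForVertexCache(osg::Node* sceneRoot)
{
    osgUtil::Optimizer optimizer;
    optimizer.optimize(sceneRoot,
                       osgUtil::Optimizer::INDEX_MESH |
                       osgUtil::Optimizer::VERTEX_POSTTRANSFORM |
                       osgUtil::Optimizer::VERTEX_PRETRANSFORM);
}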


Hope this helps,

--J




[osg-users] Parallel Rendering Problems using NVidia GPU cards in Windows

2012-06-24 Thread GeeKer Wang
Hi, all

I want to use two GTS 250 cards to do parallel rendering based on
OpenSceneGraph.

In my experiment, a scene full of complex models is split into 2 parts
according to viewport.
The 2 parts are rendered in separate windows (actually two slaves).
I hope the LEFT part is rendered on the first screen by the first GPU,
and the RIGHT part on the second screen by the second GPU.
The results are a little weird and disappointing.
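A minimal sketch of what such a two-slave, two-screen split can look like, along the lines of the stock osgwindows example (window sizes and the "cow.osg" model are placeholders):

#include <osg/Camera>
#include <osg/GraphicsContext>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

// Open a window on the given screen and attach it to the viewer as a slave
// camera whose projection is shifted left (+1) or right (-1) relative to the
// master camera, so each window renders one half of the view.
void addSlaveWindow(osgViewer::Viewer& viewer, unsigned int screenNum, double projShiftX)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits =
        new osg::GraphicsContext::Traits;
    traits->screenNum = screenNum;
    traits->x = 100; traits->y = 100;
    traits->width = 800; traits->height = 600;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());
    if (!gc.valid()) return;               // screen not available

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext(gc.get());
    camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
    GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
    camera->setDrawBuffer(buffer);
    camera->setReadBuffer(buffer);

    viewer.addSlave(camera.get(),
                    osg::Matrixd::translate(projShiftX, 0.0, 0.0),  // projection offset
                    osg::Matrixd());                                // no view offset
}

int main(int, char**)
{
    osgViewer::Viewer viewer;
    addSlaveWindow(viewer, 0,  1.0);   // LEFT half on the first screen
    addSlaveWindow(viewer, 1, -1.0);   // RIGHT half on the second screen
    viewer.setSceneData(osgDB::readNodeFile("cow.osg"));
    return viewer.run();
}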

Experiment conditions:
GigaByte motherboard EX58-UD3R, 2 GTS 250 cards, Windows XP, driver 285.58.
GPU-Z is used to monitor the GPUs.
There are 3 SLI options: Maximize 3D performance, Activate all displays,
Disable SLI.

CASE 1: 1 screen and 1 GPU, 1 window
Frame Rate: 8.4
GPU Load: 100%

CASE 2: 2 screens and 2 GPUs, 2 windows, the scene is split
Frame Rate: 3.5
GPU1 Load: 60%
GPU2 Load: 60%

CASE 3: 2 screens and 2 GPUs, 1 window on the 1st screen
Frame Rate: 8.4, 14 if only half of the scene is rendered
GPU1 Load: 100%
GPU2 Load: 100%

CASES 1, 2, and 3 behave the same for the Activate all displays and Disable SLI modes.
When I use Maximize 3D performance mode and use the SLI connector to
connect the 2 GPU cards, there is only 1 screen left.
But according to GPU-Z, only 1 GPU is working.

CASE 4: 1 screen and 2 GPUs, 1 window
Frame Rate: 8.4
GPU1 Load: 100%
GPU2 Load: 0%

CASE 5: 1 screen and 2 GPUs, 2 windows
Frame Rate: 3.5
GPU1 Load: 60%
GPU2 Load: 0%


My assumption that the 2 GPUs can work independently seems to be wrong,
and SLI is also not working.
It looks like there is always only 1 GPU doing the work, even when the other GPU
shows a full load.

So my questions are:
1. On Windows, how can I make the 2 GPUs work independently and bind each
screen to its own GPU?
2. Why does the SLI mode seem to fail? Are there some special configurations needed?

Any advice will be appreciated!

-- 
Bob