OK, I can see where things are going majorly wrong here. Let's start with the 
worst of the problems.

I notice at the bottom of your screenshot that your desktop has 4 windows named 
ParaView Server #0, ParaView Server #1, etc. Those are X windows that the 
server is opening up on your desktop. You really don't want the server to do 
that. Those windows are used for OpenGL rendering. If they are opened on your 
desktop, that means that all four of those processes on your server are sending 
all the geometry to your desktop, your desktop renders all the geometry, and 
then the images get shipped to the server. The server then composites those 
images together and sends the result back to your desktop.
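To make that compositing step concrete, here is a toy sketch in plain Python. This is not ParaView code (ParaView uses the IceT library for compositing); the `composite` helper and the tiny two-pixel "framebuffers" are invented purely to illustrate the per-pixel depth test that sort-last compositing performs after each process renders its share of the geometry:

```python
# Toy illustration of sort-last depth compositing. The helper below is
# invented for illustration; ParaView's real compositing is done by IceT.
def composite(images):
    """images: one 'framebuffer' per process, each a list of (depth, color)."""
    result = []
    for fragments in zip(*images):
        # Keep the fragment closest to the camera at each pixel.
        depth, color = min(fragments)
        result.append(color)
    return result

# Two processes each rendered their own part of the geometry:
rank0 = [(0.5, "red"), (9.9, "background")]
rank1 = [(0.2, "blue"), (0.7, "green")]
print(composite([rank0, rank1]))  # ['blue', 'green']
```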

I'm sure that when you are running the server, your DISPLAY environment 
variable is pointing back to your desktop, which is causing the problem. You 
need to make sure the server is run with DISPLAY set to localhost:0. More 
information is on the ParaView wiki at:

http://www.paraview.org/Wiki/Setting_up_a_ParaView_Server#X_Connections
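As an illustration only, here is a minimal Python sketch of setting up such a launch. The `pvserver_command` helper is hypothetical, and actually launching requires MPI and a ParaView install on the server; the point is just that DISPLAY must point at the server's own X display before pvserver starts:

```python
import os

def pvserver_command(nprocs=4, display="localhost:0"):
    """Build the command and environment for a parallel pvserver launch.

    Setting DISPLAY to localhost:0 makes each server process open its
    OpenGL windows on the server's own X display, so rendering stays on
    the server's GPU instead of being forwarded back to your desktop.
    """
    env = dict(os.environ)
    env["DISPLAY"] = display
    cmd = ["mpirun", "-np", str(nprocs), "pvserver"]
    return cmd, env

cmd, env = pvserver_command()
# subprocess.Popen(cmd, env=env)  # only on a machine with MPI and ParaView
print(cmd, env["DISPLAY"])
```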

That said, I'm not sure using your server is going to give you a big rendering 
performance boost over your desktop. The parallel rendering is really designed 
for large clusters with many GPUs. The rendering should work OK on your desktop 
as long as you're not thrashing your virtual memory (which is possible).

-Ken

From: Jérémy Santina <[email protected]>
Date: Wednesday, July 2, 2014 4:17 AM
To: Kenneth Moreland <[email protected]>
Cc: [email protected]
Subject: [EXTERNAL] Re: [Paraview] Rendering in parallel

Sorry for my poor description. I will try to give more information.

I am loading a multi-block dataset without applying any filters, and the 
rendering is surface rendering. In order to understand how it works, I am just 
running pvserver in parallel on another computer (with a better GPU) 
connected via SSH. The graphics card is an NVIDIA Quadro FX 4600, and note 
that I am not the only one using this machine. Server and client both run on 
Linux. So could the problem be that there is only one GPU?

I have attached a picture to this message.

I have another question. When I launch the rendering in parallel, a 
variable called vtkProcessId is generated. What is it? Does it do the same 
thing as applying the Process Id Scalars filter, or are they two different things?

Jérémy


2014-07-01 18:08 GMT+02:00 Moreland, Kenneth <[email protected]>:
To check the distribution of the data, use the Process Id Scalars filter. That 
should color the data based on which process it resides on.
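If you prefer scripting over the GUI, a hedged pvpython sketch might look like the following. The import only works inside a ParaView (pvpython) session, so it is wrapped in a function; the file name is a placeholder:

```python
def color_by_rank(path="data.plt"):  # "data.plt" is a placeholder file name
    # paraview.simple is only available inside a ParaView (pvpython) session.
    from paraview.simple import (OpenDataFile, ProcessIdScalars, Show,
                                 ColorBy, Render)
    reader = OpenDataFile(path)
    pid = ProcessIdScalars(Input=reader)  # adds a "ProcessId" array per rank
    display = Show(pid)
    ColorBy(display, ("POINTS", "ProcessId"))  # color by owning process
    Render()
```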

It might help if you described your system more completely. What kind of data 
are you loading? Is it image data? Polygon data? AMR? An unstructured grid? Are 
you applying any filters? How are you rendering it? Is it surface or volume 
rendering? Is there any transparency? Can you send a picture? What kind of 
parallel computer are you using? Are you running ParaView on your desktop in 
multi-core mode (I think rendering actually serializes in that case because you 
still have only one GPU), or are you connecting to a cluster? How many nodes 
on your cluster and how are they configured?

-Ken

From: Jérémy Santina <[email protected]>
Date: Tuesday, July 1, 2014 2:31 AM
To: Kenneth Moreland <[email protected]>
Cc: [email protected]
Subject: [EXTERNAL] Re: [Paraview] Rendering in parallel

Actually, I did try the D3 filter, but I didn't really see any better results. 
Maybe that is because I don't know how to configure it. How does the D3 filter work?



2014-06-30 16:21 GMT+02:00 Moreland, Kenneth <[email protected]>:
Jeremy,

Like the other parallel processing in ParaView, the efficiency is dictated by 
the distribution of the data. If your data distribution is highly imbalanced, 
such as when all the data is on one process (as in your case), then all the 
processing will happen where the data is and the rest of the processes will 
remain idle.

You could try running the D3 filter. That should redistribute the point data 
more evenly.
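A similarly hedged pvpython sketch of applying D3 follows. Again, the import only works inside a ParaView session, so it is wrapped in a function, and the file name is a placeholder:

```python
def redistribute_with_d3(path="data.plt"):  # placeholder file name
    # paraview.simple is only available inside a ParaView (pvpython) session.
    from paraview.simple import OpenDataFile, D3, Show, Render
    reader = OpenDataFile(path)
    d3 = D3(Input=reader)  # repartitions the data evenly across MPI ranks
    Show(d3)
    Render()
```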

-Ken

From: Jérémy Santina <[email protected]>
Date: Monday, June 30, 2014 2:55 AM
To: [email protected]
Subject: [EXTERNAL] [Paraview] Rendering in parallel

Good morning,

I am a novice user of ParaView and there are some aspects I am not 
familiar with. Here is one of the issues I am having:

I run ParaView in client-server mode, performing the data processing and the 
rendering on the remote server, and I read a Tecplot binary file (.plt) 
composed of more than 30 million points. This takes a lot of time. One idea 
to speed up the calculation is to launch the server in parallel. I know that 
many readers cannot read in parallel (I think that is the case for the Tecplot 
binary reader), so I don't expect any improvement there.

However, examining the Timer Log, I noticed that it doesn't speed up the rendering 
either. I tested displaying the points many times, and the experiments with and 
without parallelism gave the same results (about 40-50 seconds). I don't 
understand why.

Am I misinterpreting the Timer Log? Is the rendering time long enough to 
draw conclusions? Do I have to set specific parameters to make it work?

Thank you in advance for your help.

Jérémy


_______________________________________________
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview
