Dear Ken, 

Thanks for the reply. I have one more question.
In the timer log, the first three lines are:

--------------------------------------------------
Local Process

Still Render, 3.029 seconds

Execute vtkMPIMoveData id: 457, 1.98248 seconds
--------------------------------------------------

Do these mean that, at the client's end, it took 3.029 seconds to render the
geometry sent back from process 0?
What does "still render" mean? And what about the third line? After the still
render, why does MPI need to move data? Which data, and to where?

I tried running the same data from np1 up to np10, and I got the following
readings for still render. The rest of the readings are quite consistent,
which makes me wonder: why does the still render time decrease initially but
increase eventually? Shouldn't it become much faster when more nodes are used?
All my server nodes are identical to each other (Intel Core 2 Duo, 2.66 GHz).



----------------------------------------------

number of processes used    still render (seconds)
np1                         7.16043
np2                         3.93390
np3                         3.05784
np4                         3.02900
np5                         3.02851
np6                         3.04962
np7                         3.43479
np8                         3.47883
np9                         3.71554
np10                        3.80835

----------------------------------------------
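For what it's worth, the parallel speedup implied by these numbers can be computed directly. A quick sketch (the timings are the still-render values from the table above; the speedup/efficiency definitions are the standard ones, not anything ParaView reports):

```python
# Still-render timings (seconds) from the table above, keyed by process count.
timings = {
    1: 7.16043, 2: 3.93390, 3: 3.05784, 4: 3.02900, 5: 3.02851,
    6: 3.04962, 7: 3.43479, 8: 3.47883, 9: 3.71554, 10: 3.80835,
}

def speedup(np_count):
    """Classic parallel speedup: serial time divided by time on np processes."""
    return timings[1] / timings[np_count]

def efficiency(np_count):
    """Speedup per process; 1.0 would be ideal scaling."""
    return speedup(np_count) / np_count

for n in sorted(timings):
    print(f"np{n}: speedup {speedup(n):.2f}, efficiency {efficiency(n):.2f}")
```

The speedup peaks around np4-np5 and then falls, which is consistent with a fixed serial portion (the client-side render and transfer) plus gather/communication overhead that grows as more processes join in.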

 
I appreciate your reply!

regards,
chewping


From: [email protected]
To: [email protected]; [email protected]
CC: [email protected]
Date: Mon, 28 Sep 2009 08:37:08 -0600
Subject: Re: [Paraview] How to interpret timer log







Both scenarios are wrong.  ParaView will not push out data from process 0 to 
processes 1-3 unless you explicitly run a filter that does that (or the reader 
does that internally, but I know of no such reader).  What is actually 
happening is more along the lines of:



1. Processes 0-3 each read in a partition of data from the file.
2. Each process extracts polygonal geometry from its local data.
3. Per your settings, ParaView decides to send the geometry to the client. The 
data is collected to process 0 and sent to the client.
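The three steps above can be sketched as a toy model (plain Python standing in for MPI; `read_partition`, `extract_geometry`, and `gather_to_rank0` are illustrative names of mine, not ParaView or VTK APIs, and the "gather" is just list concatenation rather than a real MPI call):

```python
# Toy model of the data flow: each server process reads its own partition,
# extracts geometry locally, and the results are gathered to process 0
# (the "Dataserver gathering to 0" phase), which ships them to the client.

def read_partition(rank, nprocs, dataset):
    """Each process reads only its slice of the file (step 1)."""
    chunk = len(dataset) // nprocs
    start = rank * chunk
    end = len(dataset) if rank == nprocs - 1 else start + chunk
    return dataset[start:end]

def extract_geometry(cells):
    """Stand-in for polygonal geometry extraction on local data (step 2)."""
    return [("poly", c) for c in cells]

def gather_to_rank0(per_rank_geometry):
    """Stand-in for collecting all geometry to process 0 (step 3)."""
    collected = []
    for geom in per_rank_geometry:
        collected.extend(geom)
    return collected

dataset = list(range(8))   # pretend file contents: 8 cells
nprocs = 4
local = [extract_geometry(read_partition(r, nprocs, dataset))
         for r in range(nprocs)]
client_geometry = gather_to_rank0(local)   # then sent on to the client
```

Note that no full dataset is ever pushed out from process 0: each rank only ever touches its own slice until the final gather.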



The reason you are not seeing vtkFileSeriesReader on all of the servers is that 
there is a threshold in the timer log to not show anything that executes under 
a certain amount of time (by default 0.01 seconds).  If you change the Time 
Threshold to Show All, you should be able to see everything that executes, even 
if it completes immediately.
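The threshold behavior can be pictured like this (my own illustration of the filtering logic, not ParaView's code; the sub-threshold reader timing for processes 1-3 is an invented example value):

```python
# Hypothetical illustration of the timer log's Time Threshold: entries that
# complete faster than the threshold (default 0.01 s) are simply hidden.
log_entries = [
    ("Execute vtkFileSeriesReader id: 176", 0.637821),  # process 0
    ("Execute vtkFileSeriesReader id: 176", 0.0002),    # procs 1-3: near-instant
    ("Execute vtkMPIMoveData id: 457", 1.49186),
]

def visible_entries(entries, threshold=0.01):
    """Keep only entries at or above the threshold; 'Show All' is threshold 0."""
    return [(name, t) for name, t in entries if t >= threshold]

print(visible_entries(log_entries))        # fast reader executions vanish
print(visible_entries(log_entries, 0.0))   # 'Show All' reveals everything
```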



You should note that how readers read partitions is determined by the reader 
itself.  Many of the readers do not really handle partitioned reading.  Thus, 
the reader will do something naïve like read everything on process 0.  Based on 
your timings, that is probably what is happening to you.  That is, processes 
1-3 probably have empty data.  You never specified what format of data you are 
reading, so I cannot answer completely.  However, if you want to know how your 
data is partitioned on the server (at least, before rendering), you can run 
the Process Ids filter.
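The naïve-reader situation can be contrasted with the toy model above (again my own sketch, not ParaView internals; `naive_read` and `tag_process_ids` are hypothetical names, the latter a rough analogue of what the Process Ids filter reveals):

```python
# Toy contrast: a reader that does not handle partitioned reading loads
# everything on process 0, leaving processes 1-3 with empty data.

def naive_read(rank, nprocs, dataset):
    """Reader that ignores partitioning: all data lands on process 0."""
    return list(dataset) if rank == 0 else []

def tag_process_ids(rank, cells):
    """Rough analogue of the Process Ids filter: label each cell with its rank."""
    return [(rank, c) for c in cells]

dataset = list(range(8))   # pretend file contents: 8 cells
nprocs = 4
partitions = [naive_read(r, nprocs, dataset) for r in range(nprocs)]
tagged = [tag_process_ids(r, p) for r, p in enumerate(partitions)]

# Process 0 holds all 8 cells; processes 1-3 are empty.
print([len(p) for p in partitions])   # [8, 0, 0, 0]
```

With a reader like this, all cells would carry process id 0 after the Process Ids filter, which matches the timings in the log: only process 0 spends noticeable time in vtkFileSeriesReader.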



-Ken





On 9/24/09 9:09 PM, "chew ping" <[email protected]> wrote:



Hi all,

 

I'm doing parallel rendering using 1 client (dual core laptop) and 2 cluster 
servers (dual core desktop)

Below is the timer log result I collected when I ran: mpirun -np 4 pvserver

 

-----------------------------------------------------------------------------------------------

Local Process

Still Render, 3.029 seconds

Execute vtkMPIMoveData id: 457, 1.98248 seconds

 

 

Server, Process 0

Execute vtkFileSeriesReader id: 176, 0.637821 seconds

Execute vtkMPIMoveData id: 457, 1.49186 seconds

Dataserver gathering to 0, 0.829525 seconds

Dataserver sending to client, 0.661658 seconds

 

Server, Process 1

Execute vtkMPIMoveData id: 457, 0.141821 seconds

Dataserver gathering to 0, 0.141544 seconds

 

Server, Process 2

Execute vtkMPIMoveData id: 457, 0.243584 seconds

Dataserver gathering to 0, 0.243318 seconds

 

Server, Process 3

Execute vtkMPIMoveData id: 457, 0.191589 seconds

Dataserver gathering to 0, 0.191303 seconds

 

-----------------------------------------------------------------------------------------------------

 

I have difficulty interpreting the timer log. My guesses are:

 

Scenario 1: 

Process 0 reads the whole data, divides the data into 4 pieces, then 
distributes the pieces to itself and Processes 1, 2, and 3;

each node processes its data and sends it back to Process 0;

Process 0 gathers all the data and sends it back to the client;

the client renders the data.

 

Scenario 2:

Process 0 reads the whole data and distributes the whole data to Processes 
0, 1, 2, and 3;

each node 'takes' its own piece of data to process, then sends it back to 
Process 0;

Process 0 gathers all the data and sends it back to the client;

the client renders the data.

   

Which scenario is correct? Or are both wrong?

 

Are there any resources I could refer to to find out what is meant by: Execute 
vtkFileSeriesReader and Execute vtkMPIMoveData?

 

Any help / feedback is highly appreciated!

Thanks!

 

regards,

chewping

 

 

 

 

       





   ****      Kenneth Moreland

    ***      Sandia National Laboratories

***********  

*** *** ***  email: [email protected]

**  ***  **  phone: (505) 844-8919

    ***      web:   http://www.cs.unm.edu/~kmorel



                                          
_______________________________________________
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Follow this link to subscribe/unsubscribe:
http://www.paraview.org/mailman/listinfo/paraview
