Process Id Scalars didn't do the trick; there is still no performance gain with Delaunay 2D. My data is stored in an HDF5 (.h5) file which gets loaded through an XDMF file. After running the Process Id Scalars filter I do not see any changes in the distribution of the data. However, I guess that the points are already "evenly distributed" by default.
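(For context: "evenly distributed" here usually means each pvserver rank owns a roughly equal contiguous block of the points, so the Delaunay work can be shared. A minimal stdlib-only sketch of such a block decomposition follows; the function name `block_range` is my own illustration, not a ParaView API.)

```python
# Hypothetical sketch of an even block decomposition of N points over P
# ranks -- the kind of balance Process Id Scalars would reveal. This is
# NOT ParaView code; it only illustrates the partitioning arithmetic.
def block_range(n_points, n_ranks, rank):
    """Return the [start, stop) slice of points owned by `rank`."""
    base, extra = divmod(n_points, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# 10 points on 4 ranks: every rank gets 2 or 3 points (balanced).
print([block_range(10, 4, r) for r in range(4)])
# → [(0, 3), (3, 6), (6, 8), (8, 10)]

# The pathological case Ken describes -- a serial reader putting all
# points on rank 0 -- would instead look like [(0, 10), (10, 10), ...],
# leaving three ranks idle and giving no speedup.
```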
So maybe exporting the file to another format will do it, or is there a problem with the Delaunay implementation in general when running in parallel?

Guido

On Thu, 13 May 2010 15:08:04 -0600, "Moreland, Kenneth" <[email protected]> wrote:

> Ah, I see. It sounds like your data is not balanced. Many of the
> "non-parallel" file formats will do something stupid when loading
> data in parallel. For example, they might load everything on process
> 0 or load everything everywhere. (And now that I think about it, the
> Delaunay filter may have trouble in parallel.)
>
> Try running the "Process Id Scalars" filter on your data. Do the
> points look evenly distributed?
>
> -Ken
>
>
> On 5/13/10 10:17 AM, "Guido Staub" <[email protected]> wrote:
>
> Well, if I start pvserver by mpirun -np 4 pvserver, I have 3 cores
> running at almost 100%. Now I connect to the server and start a
> Delaunay 2D calculation on one of my datasets. As a result, all of the
> 4 cores are showing 100%. However, I assume that there is only one
> core doing the job, because on the one hand the calculation is really
> slow. I have done similar processing on another PC (an outdated one)
> and there is no significant performance advantage, as one would expect.
> And on the other hand, running pvserver with e.g. -np 2 results in
> 100% for 2 CPUs when starting the Delaunay 2D calc (1 core at 100%
> when the CPU is idle).
>
> Guido
>
> On Thu, 13 May 2010 12:39:43 -0600, "Moreland, Kenneth" <[email protected]> wrote:
>
> > I am afraid I simply don't understand the question. You said in (1)
> > that you have three cores running at 100%. Then in (2) you said
> > that you only have one core running. Is it happening when you
> > start the client, connect the client to the server, launch the
> > server from the client, or something else? Is something running, or
> > is the client sitting idle waiting for the user?
> >
> > -Ken
> >
> >
> > On 5/13/10 8:15 AM, "Guido Staub" <[email protected]> wrote:
> >
> > Thanks Ken, but I have already read this thread; therefore I started
> > the client process anyway without taking care of CPU usage for now.
> >
> > However, my second question still keeps me busy. Isn't it possible to
> > use all 4 cores?
> >
> > Guido
> >
> > On Thu, 13 May 2010 10:29:13 -0600, "Moreland, Kenneth" <[email protected]> wrote:
> >
> > > The question about why the pvserver processes are always at 100%
> > > CPU comes up frequently on the mailing list (such as
> > > http://www.paraview.org/pipermail/paraview/2008-December/010338.html).
> > > I've added some information to the Wiki about it to provide an
> > > explanation:
> > > http://www.paraview.org/Wiki/Setting_up_a_ParaView_Server#Server_processes_always_have_100.25_CPU_usage
> > >
> > > -Ken
> > >
> > >
> > > On 5/13/10 5:06 AM, "Guido Staub" <[email protected]> wrote:
> > >
> > > Hi all,
> > >
> > > I have successfully compiled ParaView with MPI support on my
> > > workstation (quad core). I have read that paraview runs serially and
> > > pvserver in parallel, so I started the server by mpirun -np 4
> > > pvserver and connected through X. Everything seems to work fine.
> > >
> > > But there are two strange behaviours I have noticed:
> > >
> > > 1. CPU usage on the workstation is almost 100% on three of the four
> > > cores although no client is connected (when I type mpirun -np 3
> > > pvserver there are 2 out of 4 running at 100%; with -np 2 only 1).
> > > I have noticed this using MPICH2 and OpenMPI.
> > >
> > > 2. When I now start a client process, the server uses only one core
> > > (-np 4/3/2/1). Why?
> > >
> > > Does MPI not work on multicore systems as on multiprocessor
> > > systems, or is this a ParaView issue?
> > >
> > > Thanks,
> > > Guido
> > > _______________________________________________
> > > Powered by www.kitware.com
> > >
> > > Visit other Kitware open-source projects at
> > > http://www.kitware.com/opensource/opensource.html
> > >
> > > Please keep messages on-topic and check the ParaView Wiki at:
> > > http://paraview.org/Wiki/ParaView
> > >
> > > Follow this link to subscribe/unsubscribe:
> > > http://www.paraview.org/mailman/listinfo/paraview
> > >
> > >
> > > **** Kenneth Moreland
> > > *** Sandia National Laboratories
> > > ***********
> > > *** *** *** email: [email protected]
> > > ** *** ** phone: (505) 844-8919
> > > *** web: http://www.cs.unm.edu/~kmorel
