Andy, thanks for your suggestion; unfortunately it does not work. I suspect the point you mentioned, that ParaView was not designed as a parallel grid generation tool, is the crucial one here. I will try to find a solution to my problem without falling back to serial processing.
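For reference, Andy's five-step suggestion (quoted in full below) corresponds roughly to the following pvpython pipeline. This is only a sketch: the input file name points.xmf is hypothetical, the proxy names are the paraview.simple equivalents of the filter names used in the thread (they may differ between ParaView versions), and, as Andy notes below, there is no guarantee that the second Delaunay2D pass really stitches the pieces together.

    from paraview.simple import *

    reader = OpenDataFile('points.xmf')              # hypothetical input file

    partitioned = D3(Input=reader)                   # 1) partition the points across processes
    tris = Delaunay2D(Input=partitioned)             # 2) triangulate on each process
    edges = ExtractEdges(Input=tris)                 # 3) extract the per-process edges
    stitched = Delaunay2D(Input=edges)               # 4) try to stitch the boundaries (may not work as desired)
    merged = AppendDatasets(Input=[tris, stitched])  # 5) append the pieces into one dataset

    Show(merged)
    Render()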
Guido

On Thu, 20 May 2010 10:55:21 -0400, Andy Bauer <[email protected]> wrote:

> If I remember correctly, after you partition your points, you try to create a grid with the Delaunay2D filter. It looks like the Delaunay2D filter isn't a "true" parallel filter, meaning that each process does the triangulation on the points assigned to it but doesn't take into account creating cells between close points that are on different processes. The best suggestion I have is to try the following:
> 1) use D3 to partition your points
> 2) triangulate on each process using Delaunay2D
> 3) extract the edges on each process using Extract Edges
> 4) try Delaunay2D to stitch these edges together (this may not work as desired)
> 5) append the datasets/polydatas together using the Append Datasets filter
>
> I have no idea whether or not this will work for you. The main problem is that VTK/ParaView wasn't designed to be used as a parallel grid generation tool.
>
> Andy
>
> On Wed, May 19, 2010 at 5:49 AM, Guido Staub <[email protected]> wrote:
>
> > So, with D3 things run faster now. I also solved the problem mentioned earlier; it was due to not scaling the z values. However, perhaps as a logical consequence, the result of the triangulation now consists of four tiles. Does anybody know how to connect them, for example? I have attached a screenshot showing the separation (vertical lines) of the tiles.
> >
> > Guido
> >
> > On Tue, 18 May 2010 17:17:06 +0000, Guido Staub <[email protected]> wrote:
> >
> > > Using Process ID Scalars results in a data array ProcessId which in the case of np 3 ranges from 0 to 2 (and from 0 to 3 in the case of np 4). I have selected Color by ProcessId to see whether the input is well balanced or not; unfortunately all the points have the same color. So I conclude that the data is not well balanced, right?
> > >
> > > If I apply the D3 filter I see almost the same results. The only difference I have noticed is that there is no new data array.
> > >
> > > Delaunay 2D is now about three times faster after I applied the D3 filter. However, I do not like the resulting triangulation; it looks as if some isosurfaces have been calculated.
> > >
> > > Guido
> > >
> > > On Thu, 13 May 2010 20:48:11 -0400, Andy Bauer <[email protected]> wrote:
> > >
> > > > Process Id Scalars doesn't do any load balancing, it just shows which cells are assigned to which process. You can try the D3 filter to do actual load balancing (your reader may already be doing this, though, in which case you would see the same results from a Process Id Scalars filter used after the D3 filter). I'm not that familiar with the internals of Delaunay2D for distributed points, but it's quite possible that for the set of points you have, the partitioning is not appropriate. As Ken said, use the Process Id Scalars filter to see if the input is fairly well balanced.
> > > >
> > > > Andy
> > > >
> > > > On Thu, May 13, 2010 at 3:33 PM, Guido Staub <[email protected]> wrote:
> > > >
> > > > > Process Id Scalars didn't do the trick, still no performance gain with Delaunay 2D. My data is stored in an h5 file which gets loaded through an xdmf file. After running the Process Id Scalars filter I do not see any changes in the distribution of the data. However, I guess the data is already "evenly distributed" by default.
> > > > >
> > > > > So maybe exporting the file to another format will do it, or is there a problem with the Delaunay implementation in general when running in parallel?
> > > > >
> > > > > Guido
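The Process Id Scalars / D3 check discussed in the messages above (and suggested by Ken further down) can also be scripted. A minimal pvpython sketch, assuming a hypothetical file name and the paraview.simple proxy names for the two filters:

    from paraview.simple import *

    reader = OpenDataFile('points.xmf')          # hypothetical input file

    # Attach each point's owning process as a 'ProcessId' scalar array.
    ids = ProcessIdScalars(Input=reader)
    Show(ids)
    Render()
    # Coloring by the generated 'ProcessId' array shows the distribution:
    # if everything has a single color, all points live on one process and
    # nothing is load balanced.

    # The same check after explicit load balancing with D3:
    balanced = D3(Input=reader)
    ids_after_d3 = ProcessIdScalars(Input=balanced)
    Show(ids_after_d3)
    Render()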
> > > > > On Thu, 13 May 2010 15:08:04 -0600, "Moreland, Kenneth" <[email protected]> wrote:
> > > > >
> > > > > > Ah, I see. It sounds like your data is not balanced. Many of the "non parallel" file formats will do something stupid when loading data in parallel. For example, they might load everything on process 0 or load everything everywhere. (And now that I think about it, the Delaunay filter may have trouble in parallel.)
> > > > > >
> > > > > > Try running the "Process Id Scalars" filter on your data. Do the points look evenly distributed?
> > > > > >
> > > > > > -Ken
> > > > > >
> > > > > > On 5/13/10 10:17 AM, "Guido Staub" <[email protected]> wrote:
> > > > > >
> > > > > > Well, if I start pvserver by mpirun -np 4 pvserver I have 3 cores running at almost 100%. Now I connect to the server and start a Delaunay 2D calculation on one of my datasets. As a result all 4 cores show 100%. However, I assume that there is only one core doing the job because, on the one hand, the calculation is really slow; I have done similar processing on another (outdated) PC and there is no significant performance advantage, as one would expect. And on the other hand, running pvserver with e.g. -np 2 results in 100% for 2 CPUs when starting the Delaunay 2D calculation (1 core at 100% when the CPU is idle).
> > > > > >
> > > > > > Guido
> > > > > >
> > > > > > On Thu, 13 May 2010 12:39:43 -0600, "Moreland, Kenneth" <[email protected]> wrote:
> > > > > >
> > > > > > > I am afraid I simply don't understand the question. You said in (1) that you have three cores running at 100%. Then in (2) you said that you only have one core running. Is it happening when you start the client, connect the client to the server, launch the server from the client, or something else? Is something running, or is the client sitting idle waiting for the user?
> > > > > > >
> > > > > > > -Ken
> > > > > > >
> > > > > > > On 5/13/10 8:15 AM, "Guido Staub" <[email protected]> wrote:
> > > > > > >
> > > > > > > Thanks Ken, but I have already read this thread, therefore I started the client process anyway without worrying about CPU usage for now.
> > > > > > >
> > > > > > > However, my second question still keeps me busy. Isn't it possible to use all 4 cores?
> > > > > > >
> > > > > > > Guido
> > > > > > >
> > > > > > > On Thu, 13 May 2010 10:29:13 -0600, "Moreland, Kenneth" <[email protected]> wrote:
> > > > > > >
> > > > > > > > The question about why the pvserver processes are always at 100% CPU comes up frequently on the mailing list (such as http://www.paraview.org/pipermail/paraview/2008-December/010338.html).
> > > > > > > > I've added some information to the Wiki about it to provide an explanation: http://www.paraview.org/Wiki/Setting_up_a_ParaView_Server#Server_processes_always_have_100.25_CPU_usage
> > > > > > > >
> > > > > > > > -Ken
> > > > > > > >
> > > > > > > > On 5/13/10 5:06 AM, "Guido Staub" <[email protected]> wrote:
> > > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I have successfully compiled ParaView with MPI support on my workstation (quad core). I have read that paraview runs serially and pvserver in parallel, so I started the server by mpirun -np 4 pvserver and connected through X. Everything seems to work fine.
> > > > > > > >
> > > > > > > > But there are two strange behaviours I have noticed:
> > > > > > > >
> > > > > > > > 1. CPU usage on the workstation is almost 100% on three of the four cores although no client is connected (when I type mpirun -np 3 pvserver there are 2 out of 4 running at 100%; with -np 2 only 1). I have noticed this with both MPICH2 and OpenMPI.
> > > > > > > >
> > > > > > > > 2. When I now start a client process the server uses only one core (-np 4/3/2/1). Why?
> > > > > > > >
> > > > > > > > Does MPI not work on multicore systems as it does on multiprocessor systems, or is this a ParaView issue?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Guido
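As a final sketch, connecting a scripted client to the parallel server described above ("mpirun -np 4 pvserver", listening on the default port 11111) and running the triangulation there could look like this in pvpython; again the file name is hypothetical and the proxy names are assumptions:

    from paraview.simple import *

    Connect('localhost')                 # attach this client to the running pvserver

    reader = OpenDataFile('points.xmf')  # hypothetical input file
    balanced = D3(Input=reader)          # repartition the points across the server processes
    tris = Delaunay2D(Input=balanced)    # the filter now executes on the server, not the client
    Show(tris)
    Render()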
