Hi John,

I compared a) MPI_Send/MPI_Recv with b) MPI_Isend/MPI_Irecv/MPI_Wait, and which style is fastest depends on the situation - style b is not simply worse than style a. If your network hardware gives you high latency, you are better off with style b. But it is hard to say for sure, because the difference is small.
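For reference, here is a minimal sketch of the two styles I compared (two ranks exchanging one buffer; the function names are my own, this is not ParaView code):

#include <mpi.h>

/* Style a: blocking point-to-point. Each call returns only when the
 * buffer is safe to reuse (send) or has been filled (recv). */
void exchange_blocking(int rank, double *buf, int n)
{
    if (rank == 0)
        MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

/* Style b: non-blocking post plus MPI_Wait. Pays off when the transfer
 * can overlap other work, e.g. on a high-latency network. */
void exchange_nonblocking(int rank, double *buf, int n)
{
    MPI_Request req;
    if (rank == 0)
        MPI_Isend(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
    else
        return;
    /* ... computation could run here while the transfer is in flight ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}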
Anyway, is there one single part of the ParaView code which handles the MPI stuff, or is MPI all over the place? If it is the former, it would be easy to implement some "wait" packet plus constant polling of process 0 to get rid of the dependence on the MPI library's wait behaviour (a sketch of this polling idea follows after the quoted thread below), but processor and/or memory affinity is probably a problem then ...
(http://www.open-mpi.org/faq/?category=tuning#using-paffinity)

Greetings
Jens

John M. Patchett wrote:
> Hi Jens,
> I would think that each pvserver process would have to be able to
> detect that it was in a lengthy wait state, because it sure would suck to
> have an MPI_Wait in the middle of each send/recv pair during compositing
> ... And it would probably be difficult to handle the tiled display
> communication.
> The design of a solution might not be so straightforward ... I think
> you would have to first decide whether it really is a problem versus
> the cost of doing business.
> -John.
>
> On Dec 5, 2008, at 9:53 AM, Jens wrote:
>
>> Hi John,
>>
>> I thought about this problem again...
>>
>> A solution could be
>> a) to use MPI_Irecv/MPI_Wait instead of MPI_Recv.
>>
>> If that results in the same 100% CPU for the MPI_Wait call,
>> b) it could be a solution to add a wait()/sleep() just between MPI_Irecv
>> and MPI_Wait.
>> How long this wait/sleep should be could depend
>> a) on the time the MPI_Wait call takes to return,
>> b) or on some value sent by process 0 (process 0 could put the other
>> processes to sleep for a while).
>>
>> What do you think?
>>
>> Greetings
>> Jens
>>
>> John M. Patchett wrote:
>>> Hi Jens,
>>> Your pvserver is probably waiting on an MPI_Recv and your MPI
>>> implementation is spinning.
>>> You will note that process 0 probably isn't doing this, as the other
>>> nodes are waiting on process 0 to send.
>>> I have chased this problem all the way to the MPI developers, as it is
>>> easy to replicate without ParaView, and the MPI guys assure me the
>>> alternatives are worse.
>>> -John.
>>>
>>> On Dec 5, 2008, at 8:42 AM, Jens wrote:
>>>
>>>> Hi,
>>>>
>>>> if I run "mpirun -np 4 ./pvserver" on our cluster node and connect from
>>>> my client, pvserver always shows 100% CPU usage - even when I do
>>>> nothing on the client.
>>>>
>>>> It looks as if there is a loop waiting for the client to ask for
>>>> action - but this loop never calls a wait/sleep function.
>>>>
>>>> Greetings
>>>> Jens
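P.S.: To make the polling idea above concrete, here is a minimal sketch of what I have in mind, assuming a fixed 1 ms interval is an acceptable amount of extra latency (recv_politely is a made-up name, not an existing ParaView or MPI function):

#include <mpi.h>
#include <unistd.h>   /* usleep */

/* Post a non-blocking receive, then poll it with MPI_Test and yield the
 * CPU between polls instead of spinning inside MPI_Wait/MPI_Recv. */
static void recv_politely(void *buf, int count, MPI_Datatype type,
                          int source, int tag, MPI_Comm comm)
{
    MPI_Request req;
    int done = 0;

    MPI_Irecv(buf, count, type, source, tag, comm, &req);
    while (!done) {
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        if (!done)
            usleep(1000);   /* 1 ms: trades up to 1 ms of latency for an
                               idle CPU; the interval could also adapt, or
                               come from process 0, as suggested above */
    }
}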