Hi Jens,
If I recall correctly, the behavior you are seeing comes down to the type of hardware you are using. I think some hardware allows a developer to leverage an interrupt, while some requires polling for a received message. The design requirement that MPI be fast, with low latency, usually (I presume) outweighs lowering heat production.
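
To illustrate the tradeoff: an application that wanted to be CPU-friendly could poll with a sleep in between, at the cost of up to a millisecond of added latency per message. This is just a sketch of the idea; the helper name and the 1 ms interval are my own, not anything ParaView or Open MPI actually does:

/* A CPU-friendly wait an application could implement itself.
 * Hypothetical helper -- the name and the 1 ms interval are illustrative. */
#include <mpi.h>
#include <time.h>

void polite_recv_int(int *buf, int src, int tag, MPI_Comm comm)
{
    int flag = 0;
    struct timespec ts = {0, 1000000};     /* sleep 1 ms between probes */
    while (!flag) {
        MPI_Iprobe(src, tag, comm, &flag, MPI_STATUS_IGNORE);
        if (!flag)
            nanosleep(&ts, NULL);          /* give the core back to the OS */
    }
    /* The message has already arrived, so this MPI_Recv returns at once. */
    MPI_Recv(buf, 1, MPI_INT, src, tag, comm, MPI_STATUS_IGNORE);
}

The busy polling that MPI libraries do instead is essentially the same loop without the nanosleep(), which is why you see a pegged core, but also why message latency stays in the microseconds.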

I think I first noticed this issue with MVAPICH when InfiniBand was new. I don't think we had it with Myrinet 2000 or Gig-E.

I also think ParaView is a normal MPI application in this respect; we just don't notice machines spinning while they wait during batch processing, because we assume they're working hard.
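
As for your Open MPI question below: I believe Open MPI has an MCA parameter, mpi_yield_when_idle, that makes waiting processes call sched_yield() instead of spinning aggressively. It mainly helps on oversubscribed nodes and it costs latency, and I haven't verified it against 1.2.8 specifically, but it may be worth a try:

  mpirun --mca mpi_yield_when_idle 1 -np 4 ./pvserver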

-John.

On Dec 5, 2008, at 9:42 AM, Jens wrote:

Hi John,

Thanks for your answer. That makes sense. "Normal" MPI apps are probably not written to wait for more things to do; they are simply always busy.

It is just a pity that the cluster has to run at 100%, producing a lot of
heat for nothing.

So the MPI library will probably not change this behavior? :( (I am using
Open MPI 1.2.8.)

Greetings
Jens


John M. Patchett wrote:
Hi Jens,
  Your pvserver is probably waiting in an MPI_Recv, and your MPI
implementation is spinning while it waits.
You will note that process 0 probably isn't doing this, since the other
ranks are the ones waiting on process 0 to send.
I have chased this problem all the way to the MPI developers, since it's
easy to replicate without ParaView, and they assure me the
alternatives are worse.
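
For example, a two-rank program along these lines reproduces it (a sketch from memory; the file name spin_demo.c is just for illustration). Build it with mpicc, run "mpirun -np 2 ./spin_demo", and watch top(1): rank 1 sits at ~100% CPU inside MPI_Recv even though nothing is moving:

#include <mpi.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, msg = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        sleep(60);   /* play the idle "client": send nothing for a minute */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else {
        /* Blocks here; most implementations busy-poll for the message. */
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}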
-John.

On Dec 5, 2008, at 8:42 AM, Jens wrote:

Hi,

if I run "mpirun -np 4 ./pvserver" on our cluster-node and connect from
my client, this pvserver always shows 100% cpu usage - even if I do
nothing at the client.

It seems to me as if there is a loop waiting for the client to request an
action, but this loop never calls a wait/sleep function.

Greetings
Jens
_______________________________________________
ParaView mailing list
[email protected]
http://www.paraview.org/mailman/listinfo/paraview

