Re: [OMPI users] Receiving MPI messages of unknown size

2009-06-03 Thread Gus Correa
Hi Lars, I wonder if you could always use blocking message passing on the preliminary send/receive pair that transmits the message size/header, then use non-blocking mode for the actual message. If the "message size/header" part transmits a small buffer, the preliminary send/recv pair will use
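
[A minimal sketch of the two-phase pattern Gus describes, not code from the thread; the function names and the raw-byte payload are assumptions. A small blocking exchange carries the size, and the payload itself moves with non-blocking calls.]

    #include <mpi.h>
    #include <stdlib.h>

    /* Sender: blocking header, non-blocking payload. */
    void send_object(char *data, int nbytes, int dest, MPI_Comm comm,
                     MPI_Request *req)
    {
        MPI_Send(&nbytes, 1, MPI_INT, dest, 0, comm);           /* small, cheap   */
        MPI_Isend(data, nbytes, MPI_BYTE, dest, 1, comm, req);  /* actual payload */
    }

    /* Receiver: the blocking header receive tells us how much to allocate,
     * then the payload is received without blocking (caller waits on *req). */
    char *recv_object(int src, MPI_Comm comm, MPI_Request *req, int *nbytes)
    {
        MPI_Recv(nbytes, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
        char *buf = malloc(*nbytes);
        MPI_Irecv(buf, *nbytes, MPI_BYTE, src, 1, comm, req);
        return buf;
    }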

[OMPI users] Receiving MPI messages of unknown size

2009-06-03 Thread Lars Andersson
Hi, I'm trying to solve a problem of passing serializable, arbitrarily sized objects around using MPI and non-blocking communication. The problem I'm facing is what to do at the receiving end when expecting an object of unknown size, but at the same time not block on waiting for it. When using
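
[One common answer to this question, sketched here under assumptions and not necessarily what the thread settled on: poll with MPI_Iprobe and size the buffer with MPI_Get_count before posting the matching receive. The function name is hypothetical.]

    #include <mpi.h>
    #include <stdlib.h>

    /* Returns a freshly allocated buffer if a message was pending, NULL otherwise. */
    char *try_receive(int src, int tag, MPI_Comm comm, int *nbytes)
    {
        int flag = 0;
        MPI_Status status;

        MPI_Iprobe(src, tag, comm, &flag, &status);   /* does not block */
        if (!flag)
            return NULL;

        MPI_Get_count(&status, MPI_BYTE, nbytes);     /* size of the pending message */
        char *buf = malloc(*nbytes);

        /* The probed message is already available, so this receive completes
         * without waiting on the sender. */
        MPI_Recv(buf, *nbytes, MPI_BYTE, status.MPI_SOURCE, status.MPI_TAG,
                 comm, MPI_STATUS_IGNORE);
        return buf;
    }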

Re: [OMPI users] top question

2009-06-03 Thread George Bosilca
Simon, it is a lot more difficult than it appears. You're right, select/poll can do it for any file descriptor, and shared mutexes/conditions (despite the performance impact) can do it for shared memory. However, in the case where you have to support both simultaneously, what is the right

Re: [OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread DEVEL Michel
Dear Rainer, Jeff, Gus and list, Thanks for your suggestions, I will test them tomorrow. I did not check your mails before because I was busy trying the gcc/gfortran way. I have other problems: - for static linking I am missing plenty of ibv_* routines. I saw on the net that they should be in a

Re: [OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread Gus Correa
Hi Michel, Jeff, Rainer, list, I have AMD Opteron Shanghai and Intel 10.1.017. I had trouble with the Intel -fast flag also. According to the ifort man page/help: -fast = -xT -O3 -ipo -no-prec-div -static (Each compiler vendor has a different -fast; PGI is another thing.) Intel doesn't allow

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread Ralph Castain
The changes Jeff mentioned are not in the 1.3 branch - not sure if they will come over there or not. I'm a little concerned in this thread that someone is reporting the process affinity binding changing - that shouldn't be happening, and my guess is that something outside of our control may be

Re: [OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread Jeff Squyres
Rainer and I are still iterating on the trunk solution (we moved to an hg branch just for convenience for the moment). Note that the Fortran flags aren't too important to OMPI. We *only* use them in configure. OMPI doesn't contain any Fortran 77 code at all, and the F90 module is

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread Jeff Squyres
On Jun 3, 2009, at 11:40 AM, Ashley Pittman wrote: Wasn't there a discussion about this recently on the list? OMPI binds during MPI_Init(), so it's possible for memory to be allocated on the wrong quad; the discussion was about moving the binding to the orte process, as I recall? Yes. It's

Re: [OMPI users] top question

2009-06-03 Thread Number Cruncher
Jeff Squyres wrote: We get this question so much that I really need to add it to the FAQ. :-\ Open MPI currently always spins for completion for exactly the reason that Scott cites: lower latency. Arguably, when using TCP, we could probably get a bit better performance by blocking and

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread Ashley Pittman
On Wed, 2009-06-03 at 11:27 -0400, Jeff Squyres wrote: > On Jun 3, 2009, at 10:48 AM, wrote: > > > For HPL, try writing a bash script that pins processes to their > > local memory controllers using numactl before kicking off HPL. This > > is particularly helpful

Re: [OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread Rainer Keller
Dear Michel, per the naming convention test in configure: ifort -fast will turn on -xHOST -O3 -ipo -no-prec-div -static, of which -ipo turns on interprocedural optimizations for multiple files. Here the compiled object file does not contain the symbols searched for in the configure-tests.

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread JACOB_LIBERMAN
Hi Jeff, Yes, this technique is particularly helpful for multi-threaded runs and works consistently across the various MPIs I test. Thanks, jacob > -Original Message- > From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On > Behalf Of Jeff Squyres > Sent: Wednesday, June

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread Jeff Squyres
On Jun 3, 2009, at 10:48 AM, wrote: For HPL, try writing a bash script that pins processes to their local memory controllers using numactl before kicking off HPL. This is particularly helpful when spawning more than 1 thread per process. The last line of your

Re: [OMPI users] Hypre

2009-06-03 Thread Jeff Squyres
I'm afraid I have no experience with Hypre -- sorry! :-( Do they have a support web site / mailing list somewhere? You might have better luck contacting them about their software. On Jun 3, 2009, at 11:05 AM, naveed wrote: Hi, I wanted to know if anyone has used the Hypre library for the

Re: [OMPI users] top question

2009-06-03 Thread Eugene Loh
tsi...@coas.oregonstate.edu wrote: Thanks for the explanation. I am using GigEth + Open MPI and the buffered MPI_Bsend. I had already noticed that top behaved differently on another cluster with Infiniband + MPICH. So the only option to find out how much time each process is waiting

[OMPI users] Hypre

2009-06-03 Thread naveed
Hi, I wanted to know if anyone has used the Hypre library for the solution of Ax = b systems of equations. I have problems reading in the matrix file. I went through the user manual, but couldn't get much out of it. I wanted to know what would be the best file format for reading large sparse matrices with Hypre.

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread JACOB_LIBERMAN
Hi Iftikhar, For HPL, try writing a bash script that pins processes to their local memory controllers using numactl before kicking off HPL. This is particularly helpful when spawning more than 1 thread per process. The last line of your script should look like "numactl -c $cpu_bind -m $

Re: [OMPI users] top question

2009-06-03 Thread tsilva
Thanks for the explanation. I am using GigEth + Open MPI and the buffered MPI_Bsend. I had already noticed that top behaved differently on another cluster with Infiniband + MPICH. So the only option to find out how much time each process is waiting around seems to be to profile the

Re: [OMPI users] top question

2009-06-03 Thread Jeff Squyres
We get this question so much that I really need to add it to the FAQ. :-\ Open MPI currently always spins for completion for exactly the reason that Scott cites: lower latency. Arguably, when using TCP, we could probably get a bit better performance by blocking and allowing the kernel
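
[For anyone who cares more about an honest %CPU figure in top than about latency, a user-level workaround is to replace MPI_Wait with a test-and-sleep loop. This is only a sketch with a made-up helper name, not an Open MPI feature.]

    #include <mpi.h>
    #include <unistd.h>

    /* Poll the request and give up the CPU between polls; top will then show
     * the process mostly idle while it waits, at the cost of extra latency. */
    void wait_politely(MPI_Request *req)
    {
        int done = 0;
        while (!done) {
            MPI_Test(req, &done, MPI_STATUS_IGNORE);
            if (!done)
                usleep(100);   /* tune the polling interval to taste */
        }
    }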

Re: [OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread DEVEL Michel
Hi again, In fact I forgot to set the FCFLAGS variable back to '-fast -C' (from '-O3 -C'). There is still an error (many opal_*_* subroutines not found during the ipo step) at the same place, coming from the fact that "ld: attempted static link of dynamic object

[OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread DEVEL Michel
Dear Open MPI users and developers, I have just tried installing Open MPI by compiling it rather than just using an RPM, because I want to use it with the ifort compiler. I have noticed a problem in the configure script (present at least in versions 1.3.1 and 1.3.2) for the determination of Fortran

Re: [OMPI users] top question

2009-06-03 Thread Scott Atchley
On Jun 3, 2009, at 6:05 AM, tsi...@coas.oregonstate.edu wrote: Top always shows all the parallel processes at 100% in the %CPU field, although some of the time these must be waiting for a communication to complete. How can I see actual processing as opposed to waiting at a barrier?

[OMPI users] top question

2009-06-03 Thread tsilva
Top always shows all the parallel processes at 100% in the %CPU field, although some of the time these must be waiting for a communication to complete. How can I see actual processing as opposed to waiting at a barrier? Thanks, Tiago

Re: [OMPI users] Exit Program Without Calling MPI_Finalize For Special Case

2009-06-03 Thread Ralph Castain
I'm afraid there is no way to do this in 1.3.2 (or any OMPI distributed release) with MPI applications. The OMPI trunk does provide continuous re-spawn of failed processes, mapping them to other nodes and considering fault relationships between nodes, but this only works if they are -not-

[OMPI users] Exit Program Without Calling MPI_Finalize For Special Case

2009-06-03 Thread Tee Wen Kai
Hi, I am writing a program for a central controller that will spawn processes depending on the user selection. When there is some fault in the spawned processes, for example the computer that a process was spawned on suddenly goes down, the controller should react to this and respawn the
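
[For reference, the controller-side spawn itself looks roughly like the sketch below; the executable name, worker count, and helper name are placeholders. Recovering from a failed worker is the harder part, as Ralph's reply above explains; setting MPI_ERRORS_RETURN on the intercommunicator at least lets the controller see errors instead of aborting.]

    #include <mpi.h>
    #include <stdlib.h>

    /* Spawn 'nworkers' copies of 'worker_exe' and return the intercommunicator. */
    MPI_Comm spawn_workers(char *worker_exe, int nworkers)
    {
        MPI_Comm intercomm;
        int *errcodes = malloc(nworkers * sizeof(int));

        MPI_Comm_spawn(worker_exe, MPI_ARGV_NULL, nworkers, MPI_INFO_NULL,
                       0 /* root */, MPI_COMM_SELF, &intercomm, errcodes);

        /* Report communication failures to the caller instead of aborting. */
        MPI_Comm_set_errhandler(intercomm, MPI_ERRORS_RETURN);

        free(errcodes);
        return intercomm;
    }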

Re: [OMPI users] Openmpi and processor affinity

2009-06-03 Thread Iftikhar Rathore
Gus, Thanks for the reply, and it was a typo (I'm sick). I have updated to 1.3.2 since my last post and have tried checking CPU affinity in top by using 'f' and 'j'. It shows processes spread across all 8 cores in the beginning, but it does eventually show all processes running on core 0. My P and Q's are made for