Re: [OMPI users] Qlogic & openmpi

2011-11-28 Thread arnaud Heritier
I do have a contract and I tried to open a case, but their support is... Anyway, I'm still working on the strange error message from mpirun saying it can't allocate memory when at the same time it also reports that the memory is unlimited ... Arnaud On Tue, Nov 29, 2011 at 4:23 AM, Jeff
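
In case it helps readers hitting the same message: one thing that is easy to check from inside a launched process is the locked-memory resource limit it actually sees. A minimal C sketch (assumption: the limit involved is RLIMIT_MEMLOCK, which the thread above does not confirm):

    /* Sketch: print the locked-memory limit as seen by the process itself.
     * Assumption: the "memory is unlimited" report refers to a resource
     * limit such as RLIMIT_MEMLOCK; the thread does not say which one. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("RLIMIT_MEMLOCK: unlimited\n");
            else
                printf("RLIMIT_MEMLOCK: %llu bytes\n",
                       (unsigned long long) rl.rlim_cur);
        }
        return 0;
    }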

Re: [OMPI users] problem with fortran, MPI_REDUCE and MPI_IN_PLACE

2011-11-28 Thread Jeff Squyres
Unfortunately, this is a known issue. :-\ I have not found a reliable way to deduce that MPI_IN_PLACE has been passed as the parameter to MPI_REDUCE (and friends) on OS X. There's something very strange going on with regards to the Fortran compiler and common block variables (which is where
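
For readers following along, here is a minimal C sketch of what an in-place reduction looks like; the problem discussed above is specifically about recognizing the MPI_IN_PLACE sentinel when it is passed from Fortran, not about the C usage shown here:

    /* Minimal C sketch of an in-place reduction (illustrative only; the
     * issue in this thread concerns the Fortran bindings on OS X). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        value = rank + 1;  /* each rank contributes rank+1 */

        if (rank == 0) {
            /* Root passes MPI_IN_PLACE as sendbuf: its contribution is
             * taken from (and the result stored in) the receive buffer. */
            MPI_Reduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM, 0,
                       MPI_COMM_WORLD);
            printf("sum = %d\n", value);
        } else {
            MPI_Reduce(&value, NULL, 1, MPI_INT, MPI_SUM, 0,
                       MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }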

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-28 Thread Jeff Squyres
On Nov 28, 2011, at 7:39 PM, Ralph Castain wrote: >> Meaning that per my output from above, what Paul was trying should have >> worked, no? I.e., setenv'ing OMPI_, and those env vars should >> magically show up in the launched process. > > In the -launched process- yes. However, his problem

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-28 Thread Ralph Castain
On Nov 28, 2011, at 5:32 PM, Jeff Squyres wrote: > On Nov 28, 2011, at 6:56 PM, Ralph Castain wrote: > Right-o. Knew there was something I forgot... > >> So on rsh, we do not put envar mca params onto the orted cmd line. This has >> been noted repeatedly on the user and devel lists, so it

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-28 Thread Jeff Squyres
On Nov 28, 2011, at 6:56 PM, Ralph Castain wrote: > I'm afraid that example is incorrect - you were running under slurm on your > cluster, not rsh. Ummm... right. Duh. > If you look at the actual code, you will see that we slurp up the envars into > the environment of each app_context, and

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-28 Thread Ralph Castain
I'm afraid that example is incorrect - you were running under slurm on your cluster, not rsh. If you look at the actual code, you will see that we slurp up the envars into the environment of each app_context, and then send that to the backend. In environments like slurm, we can also apply

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-28 Thread Jeff Squyres
On Nov 28, 2011, at 5:39 PM, Jeff Squyres wrote: > (off list) Hah! So much for me discreetly asking off-list before coming back with a definitive answer... :-\ -- Jeff Squyres jsquy...@cisco.com For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/

Re: [OMPI users] How are the Open MPI processes spawned?

2011-11-28 Thread Jeff Squyres
(off list) Are you sure about OMPI_MCA_* params not being treated specially? I know for a fact that they *used* to be. I.e., we bundled up all env variables that began with OMPI_MCA_* and sent them with the job to back-end nodes. It allowed sysadmins to set global MCA param values without
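
For anyone who wants to verify this behavior themselves, a small user-level sketch: each launched process simply prints whether a given OMPI_MCA_* variable is visible in its environment. The variable name OMPI_MCA_btl is only an example, not one taken from this thread:

    /* Hypothetical check: print whether an OMPI_MCA_* variable set before
     * mpirun is visible in the environment of each launched process. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        const char *val;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        val = getenv("OMPI_MCA_btl");
        printf("rank %d: OMPI_MCA_btl = %s\n", rank, val ? val : "(not set)");

        MPI_Finalize();
        return 0;
    }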

Re: [OMPI users] Deadlock at MPI_Finalize

2011-11-28 Thread Jeff Squyres
+1 on Terry's questions. Have you modified Open MPI? You were asking before about various checkpoint/migration stuff; I'm not sure/don't remember if you were adding your own plugins to Open MPI. On Nov 28, 2011, at 9:07 AM, TERRY DONTJE wrote: > Are all the other processes gone? What

Re: [OMPI users] configure blcr errors openmpi 1.4.4

2011-11-28 Thread Vlad Popa
Hi! Josh Hursey <...@open-mpi.org> writes: > > I wonder if the try_compile step is failing. Can you send a compressed > copy of your config.log from this build? > No need for that anymore... you simply have to add the params "--enable-ft-thread" "--with-ft=cr" "--enable-mpi-threads" and

Re: [OMPI users] Deadlock at MPI_Finalize

2011-11-28 Thread Mudassar Majeed
No, I am using MPI_Ssend and MPI_Recv everywhere. regards, Mudassar

Re: [OMPI users] Deadlock at MPI_Finalize

2011-11-28 Thread TERRY DONTJE
Are all the other processes gone? What version of OMPI are you using? On 11/28/2011 9:00 AM, Mudassar Majeed wrote: Dear people, In my MPI application, all the processes call the MPI_Finalize (all processes reach there) but the rank 0 process could not finish with

Re: [OMPI users] Deadlock at MPI_Finalize

2011-11-28 Thread Jeff Squyres
Do you have any outstanding MPI requests (e.g., uncompleted isends or irecvs)? On Nov 28, 2011, at 9:00 AM, Mudassar Majeed wrote: > > Dear people, > In my MPI application, all the processes call the > MPI_Finalize (all processes reach there) but the rank 0 process could
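
For context on why outstanding requests matter here, a minimal sketch (not the poster's code) showing a nonblocking receive that must be completed with MPI_Wait before MPI_Finalize:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, token = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            if (rank == 0) {
                MPI_Irecv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
                /* Omitting this MPI_Wait leaves the request outstanding
                 * and can hang the application at MPI_Finalize. */
                MPI_Wait(&req, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                token = 42;
                MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }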

[OMPI users] Deadlock at MPI_Finalize

2011-11-28 Thread Mudassar Majeed
Dear people, In my MPI application, all the processes call MPI_Finalize (all processes reach it), but the rank 0 process cannot finish MPI_Finalize and the application remains running. Please suggest what the cause of that could be. regards, Mudassar

[OMPI users] Open MPI and SLURM_CPUS_PER_TASK

2011-11-28 Thread Igor Geier
Dear all, there have been some discussions about this already, but the issue is still there (in 1.4.4). When running SLURM jobs with the --cpus-per-task parameter set (e.g. when running hybrid Open MPI/OpenMP jobs, so that --cpus-per-task corresponds to the number of OpenMP threads per rank), you get
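
For readers unfamiliar with the job type being described, a minimal hybrid MPI+OpenMP sketch; the per-rank thread count is usually taken from OMP_NUM_THREADS, which job scripts often derive from SLURM_CPUS_PER_TASK (that mapping is an assumption here, not something this post specifies):

    /* Minimal hybrid MPI+OpenMP sketch: one MPI rank per task, with the
     * OpenMP thread count per rank typically controlled via
     * OMP_NUM_THREADS in the job script. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, provided;

        /* FUNNELED is enough: only the master thread calls MPI here. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }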