[OMPI users] MPIIO and derived data types

2014-07-20 Thread Tom Rosmond
Hello, For several years I have successfully used MPIIO in a Fortran global atmospheric ensemble data assimilation system. However, I always wondered if I was fully exploiting the power of MPIIO, specifically by using derived data types to better describe memory and file data layouts. All of my
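For context, a minimal sketch of the kind of derived-type/file-view combination being asked about: a 2-D global array split 1-D across ranks, written collectively with a subarray type describing each rank's block in the file. Grid sizes, the file name, and the assumption that nyg divides evenly by the rank count are illustrative only.

  program subarray_view_sketch
    use mpi
    implicit none
    integer, parameter :: nxg = 8, nyg = 8      ! global grid (illustrative)
    integer :: ierr, rank, nprocs, fh, filetype
    integer :: sizes(2), subsizes(2), starts(2)
    integer(kind=MPI_OFFSET_KIND) :: disp
    double precision, allocatable :: local(:,:)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    ! 1-D decomposition along the second dimension (assumes nyg divisible by nprocs)
    sizes    = (/ nxg, nyg /)
    subsizes = (/ nxg, nyg / nprocs /)
    starts   = (/ 0, rank * (nyg / nprocs) /)
    allocate(local(subsizes(1), subsizes(2)))
    local = dble(rank)

    ! The derived type describes where this rank's block lives in the file.
    call MPI_Type_create_subarray(2, sizes, subsizes, starts, &
         MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, filetype, ierr)
    call MPI_Type_commit(filetype, ierr)

    call MPI_File_open(MPI_COMM_WORLD, 'field.dat', &
         ior(MPI_MODE_WRONLY, MPI_MODE_CREATE), MPI_INFO_NULL, fh, ierr)
    disp = 0
    call MPI_File_set_view(fh, disp, MPI_DOUBLE_PRECISION, filetype, &
         'native', MPI_INFO_NULL, ierr)
    ! Collective write: each rank deposits its block at the right file offsets.
    call MPI_File_write_all(fh, local, size(local), MPI_DOUBLE_PRECISION, &
         MPI_STATUS_IGNORE, ierr)
    call MPI_File_close(fh, ierr)

    call MPI_Type_free(filetype, ierr)
    call MPI_Finalize(ierr)
  end program subarray_view_sketch

A second subarray type describing the in-memory layout (for example a block inside a halo-padded array) could be passed as the datatype argument of MPI_File_write_all in place of MPI_DOUBLE_PRECISION, which is the "memory layout" half of the question.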

Re: [OMPI users] Fwd: Re[4]: Salloc and mpirun problem

2014-07-20 Thread Ralph Castain
Yeah, we aren't connecting back - is there a firewall running? You need to leave the "--debug-daemons --mca plm_base_verbose 5" on there as well to see the entire problem. What you can see here is that mpirun is listening on several interfaces: > [access1:24264] [[55095,0],0] oob:tcp:init addin
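Putting the suggestions from this thread together, the requested debug invocation looks roughly like the following; the salloc line is the one from the thread, and ./a.out stands in for the actual application, which is not shown:

  $ salloc -N2 --exclusive -p test -J ompi
  $ mpirun --debug-daemons --mca plm_base_verbose 5 \
           --mca oob_base_verbose 10 --mca rml_base_verbose 10 ./a.out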

Re: [OMPI users] Fwd: Re[4]: Salloc and mpirun problem

2014-07-20 Thread Timur Ismagilov
Forwarded message From: Timur Ismagilov To: Ralph Castain Date: Sun, 20 Jul 2014 21:58:41 +0400 Subject: Re[2]: [OMPI users] Fwd: Re[4]: Salloc and mpirun problem Here it is: $ salloc -N2 --exclusive -p test -J ompi salloc: Granted job allocation 647049 $ mpirun -mc

Re: [OMPI users] Help with multirail configuration

2014-07-20 Thread Tobias Kloeffel
I found no option in 1.6.5 and 1.8.1... On 7/20/2014 6:29 PM, Ralph Castain wrote: What version of OMPI are you talking about? On Jul 20, 2014, at 9:11 AM, Tobias Kloeffel wrote: Hello everyone, I am trying to get the maximum performance out of my two node testing setup. Each node consi

Re: [OMPI users] Mpirun 1.5.4 problems when request > 28 slots

2014-07-20 Thread Ralph Castain
I'm unaware of any CentOS-OMPI bug, and I've been using CentOS throughout the 6.x series running OMPI 1.6.x and above. I can't speak to the older versions of CentOS and/or the older versions of OMPI. On Jul 19, 2014, at 8:14 PM, Lane, William wrote: > Yes there is a second HPC Sun Grid Engine

Re: [OMPI users] Help with multirail configuration

2014-07-20 Thread Ralph Castain
What version of OMPI are you talking about? On Jul 20, 2014, at 9:11 AM, Tobias Kloeffel wrote: > Hello everyone, > > I am trying to get the maximum performance out of my two node testing setup. > Each node consists of 4 Sandy Bridge CPUs and each CPU has one directly > attached Mellanox QDR

Re: [OMPI users] Fwd: Re[4]: Salloc and mpirun problem

2014-07-20 Thread Ralph Castain
Try adding -mca oob_base_verbose 10 -mca rml_base_verbose 10 to your cmd line. It looks to me like we are unable to connect back to the node where you are running mpirun for some reason. On Jul 20, 2014, at 9:16 AM, Timur Ismagilov wrote: > I have the same problem in openmpi 1.8.1(Apr 23, 201
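The same verbosity can also be requested through the environment rather than the command line, since any MCA parameter can be exported with the OMPI_MCA_ prefix; shown here for the two parameters named above, with ./a.out as a placeholder executable:

  $ export OMPI_MCA_oob_base_verbose=10
  $ export OMPI_MCA_rml_base_verbose=10
  $ mpirun ./a.out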

[OMPI users] Fwd: Re[4]: Salloc and mpirun problem

2014-07-20 Thread Timur Ismagilov
I have the same problem in openmpi 1.8.1 (Apr 23, 2014). Does srun have an equivalent of mpirun's --map-by parameter, or can I change it from the bash environment? Forwarded message From: Timur Ismagilov To: Mike Dubman Cc: Open MPI Users Date: Thu, 17 Jul 2014 16:42:24
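On the environment part of the question: mpirun's --map-by is backed by an MCA parameter, so one way to set it without touching the command line is the usual OMPI_MCA_ prefix. The parameter name below is what the 1.8 series appears to use and is worth confirming with ompi_info; ./a.out is a placeholder:

  $ export OMPI_MCA_rmaps_base_mapping_policy=node
  $ mpirun ./a.out

When launching directly with srun (no mpirun), process placement is governed by SLURM's own options such as --distribution, not by OMPI's rmaps framework.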

[OMPI users] Help with multirail configuration

2014-07-20 Thread Tobias Kloeffel
Hello everyone, I am trying to get the maximum performance out of my two node testing setup. Each node consists of 4 Sandy Bridge CPUs and each CPU has one directly attached Mellanox QDR card. Both nodes are connected via an 8-port Mellanox switch. So far I found no option that allows binding m
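Not an answer to the locality question being asked, but for reference the knobs usually involved in a multirail openib setup look roughly like this; the device names mlx4_0..mlx4_3 and the osu_bw benchmark are illustrative assumptions, and --bind-to socket is the 1.8-series spelling (1.6 used --bind-to-socket):

  $ mpirun -np 8 -npernode 4 --bind-to socket \
           --mca btl openib,sm,self \
           --mca btl_openib_if_include mlx4_0,mlx4_1,mlx4_2,mlx4_3 \
           ./osu_bw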

Re: [OMPI users] after mpi_finalize(Err)

2014-07-20 Thread Ralph Castain
On Jul 20, 2014, at 7:11 AM, Diego Avesani wrote: > Dear all, > I have a question about mpi_finalize. > > After mpi_finalize the program returns to a single core, have I understood > correctly? No - we don't kill any processes. We just tear down the MPI system. All your processes continue to execu

[OMPI users] after mpi_finalize(Err)

2014-07-20 Thread Diego Avesani
Dear all, I have a question about mpi_finalize. After mpi_finalize the program returns to a single core, have I understood correctly? In this case I do not understand something: in my program I have something like: call MPI_FINALIZE(rc) write(*,*) "hello world" However, my program writes it many times
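A minimal sketch along the lines of the code quoted above; per the answer earlier in this digest, every process that mpirun launched keeps running past MPI_Finalize, so with mpirun -np N the message appears N times:

  program finalize_demo
    use mpi
    implicit none
    integer :: ierr
    call MPI_Init(ierr)
    ! ... parallel work ...
    call MPI_Finalize(ierr)
    ! MPI is torn down here, but the launched processes are not killed,
    ! so each one executes the statement below independently.
    write(*,*) 'hello world'
  end program finalize_demo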