Re: [OMPI users] Issues with OpenMPI 1.8.2, GCC 4.9.1, and SLURM Interactive Jobs

2014-08-28 Thread Ralph Castain
I'm unaware of any changes to the Slurm integration between rc4 and final release. It sounds like this might be something else going on - try adding "--leave-session-attached --debug-daemons" to your 1.8.2 command line and let's see if any errors get reported. On Aug 28, 2014, at 12:20 PM, Mat
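To make the suggested debug invocation concrete, a minimal sketch of the command line follows. The node count and application name (`./my_app`) are placeholders, not from the thread; only the two debug flags are the ones Ralph names.

```
# Re-run the failing 1.8.2 job inside the SLURM allocation with daemon
# debugging enabled; --leave-session-attached keeps the daemons' stderr
# attached to the terminal so any error output is visible.
salloc -N 2 mpirun --leave-session-attached --debug-daemons -np 4 ./my_app
```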

Re: [OMPI users] How does binding option affect network traffic?

2014-08-28 Thread Ralph Castain
On Aug 28, 2014, at 11:50 AM, McGrattan, Kevin B. Dr. wrote: > My institute recently purchased a linux cluster with 20 nodes; 2 sockets per > node; 6 cores per socket. OpenMPI v 1.8.1 is installed. I want to run 15 > jobs. Each job requires 16 MPI processes. For each job, I want to use two

[OMPI users] Issues with OpenMPI 1.8.2, GCC 4.9.1, and SLURM Interactive Jobs

2014-08-28 Thread Matt Thompson
Open MPI List, I recently encountered an odd bug with Open MPI 1.8.1 and GCC 4.9.1 on our cluster (reported on this list), and decided to try it with 1.8.2. However, we seem to be having an issue with Open MPI 1.8.2 and SLURM. Even weirder, Open MPI 1.8.2rc4 doesn't show the bug. And the bug is: I

[OMPI users] How does binding option affect network traffic?

2014-08-28 Thread McGrattan, Kevin B. Dr.
My institute recently purchased a linux cluster with 20 nodes; 2 sockets per node; 6 cores per socket. OpenMPI v 1.8.1 is installed. I want to run 15 jobs. Each job requires 16 MPI processes. For each job, I want to use two cores on each node, mapping by socket. If I use these options: #PBS -l
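The layout described (16 processes per job, two cores per node, mapped by socket) could be expressed with the standard Open MPI 1.8.x mapping options roughly as follows. This is a sketch: `./my_job` is a placeholder, and the exact PBS/scheduler interaction is not shown.

```
# Run 16 processes, at most 2 per node (one per socket), binding each
# process to a core; -npernode, --map-by, and --bind-to are standard
# Open MPI 1.8.x mpirun options.
mpirun -np 16 -npernode 2 --map-by socket --bind-to core ./my_job
```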

Re: [OMPI users] mxm 3.0 and knem warnings

2014-08-28 Thread Brock Palen
Interesting, we are using 3.0 that is in MOFED, and that is also what is on the MXM download site. Kinda confusing. Brock Palen www.umich.edu/~brockp CAEN Advanced Computing XSEDE Campus Champion bro...@umich.edu (734)936-1985 On Aug 28, 2014, at 2:12 AM, Mike Dubman wrote: > btw, you may w

Re: [OMPI users] Mpirun 1.5.4 problems when request > 28 slots (updated findings)

2014-08-28 Thread Reuti
Am 28.08.2014 um 10:09 schrieb Lane, William: > I have some updates on these issues and some test results as well. > > We upgraded OpenMPI to the latest version 1.8.2, but when submitting jobs via > the SGE orte parallel environment received > errors whenever more slots are requested than there

Re: [OMPI users] long initialization

2014-08-28 Thread Timur Ismagilov
In OMPI 1.9a1r32604 I get much better results: $ time mpirun --mca oob_tcp_if_include ib0 -np 1 ./hello_c Hello, world, I am 0 of 1, (Open MPI v1.9a1, package: Open MPI semenov@compiler-2 Distribution, ident: 1.9a1r32604, repo rev: r32604, Aug 26, 2014 (nightly snapshot tarball), 146) real 0m4.1

Re: [OMPI users] long initialization

2014-08-28 Thread Timur Ismagilov
I enclose 2 files with the output of the two following commands (OMPI 1.9a1r32570) $time mpirun --leave-session-attached -mca oob_base_verbose 100 -np 1 ./hello_c >& out1.txt (Hello, world, I am ) real 1m3.952s user 0m0.035s sys 0m0.107s $time mpirun --leave-session-attached -mca oob_base_verbose

Re: [OMPI users] Mpirun 1.5.4 problems when request > 28 slots (updated findings)

2014-08-28 Thread Lane, William
I have some updates on these issues and some test results as well. We upgraded OpenMPI to the latest version 1.8.2, but when submitting jobs via the SGE orte parallel environment we received errors whenever more slots are requested than there are actual cores on the first node allocated to the job.
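One common cause of this symptom is the SGE parallel environment's allocation rule packing all requested slots onto the first node. A hedged sketch of a round-robin PE definition (in the format shown by `qconf -sp orte`; the slot count and flag values here are illustrative, not taken from the thread):

```
# Example SGE parallel environment definition (edit via `qconf -mp orte`).
# allocation_rule $round_robin spreads slots across nodes instead of
# filling the first node before moving on.
pe_name            orte
slots              999
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
```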

Re: [OMPI users] mxm 3.0 and knem warnings

2014-08-28 Thread Mike Dubman
btw, you may want to use latest mxm v3.1 which is part of hpcx package http://www.mellanox.com/products/hpcx On Thu, Aug 28, 2014 at 4:10 AM, Brock Palen wrote: > Brice, et al. > > Thanks a lot for this info. We are setting up new builds of OMPI 1.8.2 > with knem and mxm 3.0, > > If we have qu
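For reference, building Open MPI 1.8.2 against knem and MXM (as Brock describes setting up) typically uses the `--with-mxm` and `--with-knem` configure options. The install prefixes below are assumptions; point them at the actual MOFED/HPC-X install locations on the cluster.

```
# Configure Open MPI 1.8.2 against MXM and knem; the paths shown are
# illustrative and must match the local MOFED/hpcx installation.
./configure --prefix=/opt/openmpi-1.8.2 \
    --with-mxm=/opt/mellanox/mxm \
    --with-knem=/opt/knem
make -j8 && make install
```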