I'm unaware of any changes to the Slurm integration between rc4 and final
release. It sounds like this might be something else going on - try adding
"--leave-session-attached --debug-daemons" to your 1.8.2 command line and let's
see if any errors get reported.
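Spelled out as a command line, the suggestion above would look roughly like the sketch below; the executable name and rank count are placeholders, not from the original post. With these flags the ORTE daemons stay attached to mpirun and print their debug output, so daemon-side errors become visible on the terminal.

```shell
# Hypothetical 1.8.2 invocation with the two suggested debug flags added;
# './my_app' and '-np 16' are stand-ins for the user's actual job.
mpirun --leave-session-attached --debug-daemons -np 16 ./my_app
```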
On Aug 28, 2014, at 12:20 PM, Mat
On Aug 28, 2014, at 11:50 AM, McGrattan, Kevin B. Dr. wrote:
> My institute recently purchased a Linux cluster with 20 nodes; 2 sockets per
> node; 6 cores per socket. Open MPI v1.8.1 is installed. I want to run 15
> jobs. Each job requires 16 MPI processes. For each job, I want to use two
> cores on each node, mapping by socket.
Open MPI List,
I recently encountered an odd bug with Open MPI 1.8.1 and GCC 4.9.1 on our
cluster (reported on this list), and decided to try it with 1.8.2. However,
we seem to be having an issue with Open MPI 1.8.2 and SLURM. Even weirder,
Open MPI 1.8.2rc4 doesn't show the bug. And the bug is: I
My institute recently purchased a Linux cluster with 20 nodes; 2 sockets per
node; 6 cores per socket. Open MPI v1.8.1 is installed. I want to run 15 jobs.
Each job requires 16 MPI processes. For each job, I want to use two cores on
each node, mapping by socket. If I use these options:
#PBS -l
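As a quick sanity check, the numbers in this request fit the machine exactly; a sketch of the arithmetic, with all figures taken from the post above:

```shell
# Layout arithmetic for the request above: 20 nodes x 2 sockets x 6 cores,
# 15 jobs of 16 ranks each, 2 ranks (one per socket) on each node per job.
nodes=20 sockets_per_node=2 cores_per_socket=6
jobs=15 ranks_per_job=16 ranks_per_node=2

nodes_per_job=$(( ranks_per_job / ranks_per_node ))
total_ranks=$(( jobs * ranks_per_job ))
total_cores=$(( nodes * sockets_per_node * cores_per_socket ))

echo "nodes per job: $nodes_per_job"   # 8
echo "total ranks:   $total_ranks"     # 240
echo "total cores:   $total_cores"     # 240: 15 such jobs exactly fill the cluster
```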
Interesting; we are using the MXM 3.0 that ships in MOFED, and that is also what
is on the MXM download site. Kinda confusing.
Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985
On Aug 28, 2014, at 2:12 AM, Mike Dubman wrote:
> btw, you may want to use latest mxm v3.1 which is part of hpcx package
On 28.08.2014 at 10:09, Lane, William wrote:
> I have some updates on these issues and some test results as well.
>
> We upgraded Open MPI to the latest version, 1.8.2, but when submitting jobs via
> the SGE orte parallel environment we received errors whenever more slots are
> requested than there are actual cores on the first node allocated to the job.
In OMPI 1.9a1r32604 I get much better results:
$ time mpirun --mca oob_tcp_if_include ib0 -np 1 ./hello_c
Hello, world, I am 0 of 1, (Open MPI v1.9a1, package: Open MPI
semenov@compiler-2 Distribution, ident: 1.9a1r32604, repo rev: r32604, Aug 26,
2014 (nightly snapshot tarball), 146)
real 0m4.1
I enclose 2 files with the output of the two following commands (OMPI 1.9a1r32570):
$ time mpirun --leave-session-attached -mca oob_base_verbose 100 -np 1 ./hello_c >& out1.txt
(Hello, world, I am )
real 1m3.952s
user 0m0.035s
sys 0m0.107s
$ time mpirun --leave-session-attached -mca oob_base_verbose
I have some updates on these issues and some test results as well.
We upgraded Open MPI to the latest version, 1.8.2, but when submitting jobs via
the SGE orte parallel environment we received errors whenever more slots are
requested than there are actual cores on the first node allocated to the job.
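For reference, a submission through that parallel environment looks roughly like the sketch below; the script name and slot count are placeholders, not from the original post. The error described above appears once the requested slot count exceeds the core count of the first node SGE allocates.

```shell
# Hypothetical SGE submission via the orte parallel environment;
# 'myjob.sh' and the 32-slot request are placeholders.
qsub -pe orte 32 -cwd myjob.sh
```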
btw, you may want to use latest mxm v3.1 which is part of hpcx package
http://www.mellanox.com/products/hpcx
On Thu, Aug 28, 2014 at 4:10 AM, Brock Palen wrote:
> Brice, et al.
>
> Thanks a lot for this info. We are setting up new builds of OMPI 1.8.2
> with knem and mxm 3.0,
>
> If we have qu