While trying to confirm my belief that recent versions of Open MPI support the
MPI-3.1 standard, I was a bit surprised to find this in the mpi.3 man page from
the 4.0.2 release:
"The outcome, known as the MPI Standard, was first published in 1993; its
most recent version (MPI-2) was
Lost a line…
Also helpful to check
$ srun -N3 which ompi_info
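The point of that check is to confirm that every node resolves the same Open MPI
installation. A related, hedged suggestion is to compare the reported versions as
well:

$ srun -N3 ompi_info | grep "Open MPI:"

If the paths or versions differ between nodes, srun and the application are
probably picking up mismatched installs.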
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Riebs, Andy via users
Sent: Monday, April 27, 2020 10:59 AM
To: Open MPI Users
Cc: Riebs, Andy
Subject: Re: [OMPI users] Can't start jobs with srun.
Y’know
script was to launch my code with
mpirun. As mpirun was only finding one slot per node, I used
"--oversubscribe --bind-to core" and checked that every process was
bound to a separate core. It worked, but don't ask me why :-)
Patrick
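For anyone reading along later: the exact command line was not quoted in the
thread, but a sketch of that invocation (the executable name ./my_code is
hypothetical) would be:

$ mpirun --oversubscribe --bind-to core --report-bindings ./my_code

mpirun's --report-bindings option prints each rank's binding at startup, which is
one way to confirm that every process really is bound to its own core.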
On 24/04/2020 at 20:28, Riebs, Andy via users wrote:
Prentice, have you tried something trivial, like "srun -N3 hostname", to rule
out non-OMPI problems?
Andy
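As a concrete sketch of that sanity check (node names below are only illustrative):

$ srun -N3 hostname
node01
node02
node03

If even this fails or hangs, the problem lies in the Slurm launch path rather than
in Open MPI itself.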
-----Original Message-----
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Prentice Bisbal via users
Sent: Friday, April 24, 2020 2:19 PM
To: Ralph Castain ; Open MPI
Is there any chance that the fact that Riddhi appears to be trying to execute
an uncompiled hello.c could be the problem here?
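If so, the usual fix is to compile with the Open MPI wrapper compiler first and
then launch the resulting executable; the file names here are just the
conventional ones:

$ mpicc hello.c -o hello
$ srun -N3 ./hello

srun (and mpirun) expect an executable, so passing hello.c directly cannot work.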
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Jeff Squyres (jsquyres) via users
Sent: Monday, August 19, 2019 2:05 PM
To: Open MPI User's List
Cc:
Thomas, your test case is somewhat similar to a bash fork() bomb -- not the
same, but similar. After running one of your failing jobs, you might check to
see if the “out-of-memory” (“OOM”) killer has been invoked. If it has, that can
lead to unexpected consequences, such as what you’ve
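One common way to check whether the OOM killer fired is to search the kernel log
on the affected compute nodes after the failed job; the exact log wording varies
by kernel, so this is only a sketch:

$ dmesg | grep -i -e "out of memory" -e "killed process"

Matches there would indicate that the kernel, not Open MPI, terminated the
processes.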
To close the thread here… I got the following information:
Looking at SLURM_CPU_BIND is the right idea, but there are quite a few more
options. It misses map_cpu and rank, plus the NUMA-based options:
rank_ldom, map_ldom, and mask_ldom. See the srun man page for the documentation.
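As a hedged illustration, the SLURM_CPU_BIND* environment variables that srun
exports can be dumped per node to see which binding type is actually in effect
(the option spelling, --cpu-bind vs. --cpu_bind, depends on the Slurm version):

$ srun -N3 --cpu-bind=cores env | grep "^SLURM_CPU_BIND"

The values show whether one of the types mentioned above (map_cpu, rank,
rank_ldom, map_ldom, mask_ldom, ...) was requested.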
From: Riebs
My fault, I thought the tarball name looked funny :-)
Will try again tomorrow
Andy
--
Andy Riebs
andy.ri...@hp.com
Original message
From: Ralph Castain
Date: 04/12/2015 3:10 PM (GMT-05:00)
To: Open MPI Users
Subject: Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on