[OMPI users] Is the mpi.3 manpage out of date?

2020-08-25 Thread Riebs, Andy via users
In searching to confirm my belief that recent versions of Open MPI support the MPI-3.1 standard, I was a bit surprised to find this in the mpi.3 man page from the 4.0.2 release: "The outcome, known as the MPI Standard, was first published in 1993; its most recent version (MPI-2) was
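For anyone checking the same thing: independent of what the man page text says, the library itself reports which standard it implements via MPI_Get_version() and MPI_Get_library_version(). A minimal sketch (the file name version_check.c is just a placeholder):

/* version_check.c -- ask the library which MPI standard it implements.
 * Build: mpicc version_check.c -o version_check
 * Run:   mpirun -n 1 ./version_check
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int version, subversion, len;
    char lib[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);
    MPI_Get_version(&version, &subversion);   /* e.g. 3 and 1 for MPI-3.1 */
    MPI_Get_library_version(lib, &len);       /* e.g. "Open MPI v4.0.2 ..." */
    printf("MPI standard %d.%d\nLibrary: %s\n", version, subversion, lib);
    MPI_Finalize();
    return 0;
}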

Re: [OMPI users] Can't start jobs with srun.

2020-04-27 Thread Riebs, Andy via users
Lost a line… Also helpful to check $ srun -N3 which ompi_info From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Riebs, Andy via users Sent: Monday, April 27, 2020 10:59 AM To: Open MPI Users Cc: Riebs, Andy Subject: Re: [OMPI users] Can't start jobs with srun. Y’know

Re: [OMPI users] Can't start jobs with srun.

2020-04-27 Thread Riebs, Andy via users
script was to launch my code with mpirun. As mpirun was only finding one slot per node, I used "--oversubscribe --bind-to core" and checked that every process was bound to a separate core. It worked, but don't ask me why :-) Patrick On 24/04/2020 at 20:28, Riebs, Andy via users wrote:
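A quick way to confirm that kind of binding from inside the job is to have each rank report where it is running. A minimal sketch, Linux-only since it relies on sched_getcpu(); the file name binding_check.c is just a placeholder:

/* binding_check.c -- each rank reports the host and CPU it is currently on.
 * Build: mpicc binding_check.c -o binding_check
 * Run:   mpirun --bind-to core ./binding_check   (or launch via srun)
 */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    /* If binding worked, ranks on the same node should report different CPUs. */
    printf("rank %d on %s, currently on CPU %d\n", rank, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}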

Re: [OMPI users] Can't start jobs with srun.

2020-04-24 Thread Riebs, Andy via users
Prentice, have you tried something trivial, like "srun -N3 hostname", to rule out non-OMPI problems? Andy -Original Message- From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Prentice Bisbal via users Sent: Friday, April 24, 2020 2:19 PM To: Ralph Castain ; Open MPI

Re: [OMPI users] **URGENT: Error during testing

2019-08-19 Thread Riebs, Andy via users
Is there any chance that the problem here is that Riddhi appears to be trying to execute an uncompiled hello.c? From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Jeff Squyres (jsquyres) via users Sent: Monday, August 19, 2019 2:05 PM To: Open MPI User's List Cc:
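For reference, a hello.c has to go through the MPI wrapper compiler before it can be launched; a minimal sketch along the usual lines, with placeholder file and binary names:

/* hello.c -- the usual MPI "hello world".
 * It must be compiled first:        mpicc hello.c -o hello
 * and then launched as the binary:  mpirun -n 4 ./hello
 * (running "mpirun hello.c" fails, since hello.c is not an executable).
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);
    printf("Hello from rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}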

Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors

2019-03-17 Thread Riebs, Andy
Thomas, your test case is somewhat similar to a bash fork() bomb -- not the same, but similar. After running one of your failing jobs, you might check to see if the “out-of-memory” (“OOM”) killer has been invoked. If it has, that can lead to unexpected consequences, such as what you’ve
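As a point of comparison, a spawn loop that explicitly disconnects from each child intercommunicator looks roughly like the sketch below; whether that is enough to avoid the pipe leak in a given Open MPI release is a separate question. The program name "./child" is a placeholder for an MPI program that calls MPI_Comm_get_parent() and MPI_Comm_disconnect() itself before finalizing.

/* spawn_loop.c -- parent repeatedly spawns a child job and disconnects.
 * Build: mpicc spawn_loop.c -o spawn_loop
 */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm child;
    int i;

    MPI_Init(&argc, &argv);
    for (i = 0; i < 10; i++) {
        MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
        /* Releasing the intercommunicator lets the runtime clean up the
         * resources (pipes, etc.) associated with the spawned job. */
        MPI_Comm_disconnect(&child);
    }
    MPI_Finalize();
    return 0;
}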

Re: [OMPI users] Slurm binding not propagated to MPI jobs

2016-11-01 Thread Riebs, Andy
To close the thread here… I got the following information: Looking at SLURM_CPU_BIND is the right idea, but there are quite a few more options. It misses map_cpu, rank, plus the NUMA-based options: rank_ldom, map_ldom, and mask_ldom. See the srun man pages for documentation. From: Riebs
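For anyone following along, those binding hints can be inspected from inside the job itself; a rough sketch (Linux-only, and exactly which SLURM_CPU_BIND* variables are set depends on the srun --cpu-bind options used):

/* show_binding.c -- print the Slurm binding hints and the actual affinity mask.
 * Build: mpicc show_binding.c -o show_binding ; launch via srun.
 */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, cpu;
    cpu_set_t mask;
    const char *bind_type = getenv("SLURM_CPU_BIND_TYPE");
    const char *bind_list = getenv("SLURM_CPU_BIND_LIST");

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    sched_getaffinity(0, sizeof(mask), &mask);
    printf("rank %d: SLURM_CPU_BIND_TYPE=%s SLURM_CPU_BIND_LIST=%s, allowed CPUs:",
           rank, bind_type ? bind_type : "(unset)",
           bind_list ? bind_list : "(unset)");
    for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf(" %d", cpu);
    printf("\n");

    MPI_Finalize();
    return 0;
}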

Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on Intel Xeon Phi/MIC

2015-04-12 Thread Riebs, Andy
My fault, I thought the tarball name looked funny :-) Will try again tomorrow. Andy -- Andy Riebs andy.ri...@hp.com Original message From: Ralph Castain Date: 04/12/2015 3:10 PM (GMT-05:00) To: Open MPI Users Subject: Re: [OMPI users] Problems using Open MPI 1.8.4 OSHMEM on