you can investigate this possible bug?
Thanks,
Kurt
From: users <users-boun...@lists.open-mpi.org> On Behalf Of Ralph Castain via users
Sent: Wednesday, November 3, 2021 8:45 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Ralph Castain <r...@open-mpi.org>
From: users On Behalf Of Ralph Castain via users
Sent: Friday, November 5, 2021 9:50 AM
To: Open MPI Users
Cc: Ralph Castain
Subject: [EXTERNAL] Re: [OMPI users] Reserving slots and filling them after job launch with MPI_Comm_spawn
Here is the problem:
[n022.cluster.com:30045
From: users On Behalf Of Ralph Castain via users
Sent: Wednesday, November 3, 2021 11:58 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Ralph Castain <r...@open-mpi.org>
Subject: [EXTERNAL] Re: [OMPI users] Reserving slots and filling them after job launch with MPI_Comm_spawn
Could you please en
To: Open MPI Users
Cc: Ralph Castain
Subject: [EXTERNAL] Re: [OMPI users] Reserving slots and filling them after job
launch with MPI_Comm_spawn
Sounds like a bug to me - regardless of configuration, if the hostfile contains an entry for each slot on a node, OMPI should have added those up.
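For reference, a hostfile that reserves multiple slots per node is typically written one node per line with a slots count (the node names and counts below are only illustrative):

```
# Open MPI hostfile: "slots" is the number of processes
# mpirun may place on that node.
node01 slots=4
node02 slots=4
node03 slots=4
```

With a file like this, mpirun sums the slots values to determine the total number of processes it can launch, which is the accounting being discussed above.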
On Nov 3, 2021, at 2:49 AM, Gilles Gouaillardet via users <users@lists.open-mpi.org> wrote:
Kurt,
Assuming you built Open MPI with tm support (the default if tm is detected at configure time; you can pass --with-tm to configure to make it abort if tm support is not found), you should not need to use a hostfile.
As a workaround, I would suggest you try to
mpirun --map-by node -np 21 ...
I'm using Open MPI 4.1.1, built with Nvidia's nvc++ 20.9 and with Torque support.
I want to reserve multiple slots on each node, and then launch a single manager
process on each node. The remaining slots would be filled up as the manager
spawns new processes with MPI_Comm_spawn on